17X dump parsing performance regression since commit 35b3ff7 #57
(1) I'd be interested to see if you can run a build with just this reverted. (2) I wonder if you can run a job with async-profiler and capture a full wall-time and CPU flamegraph of the run? Nine hours is an extraordinarily long time: 77 million boolean[] entries is 77 MB, and scanning 77 MB to count 1s vs 0s should not be a 9-hour job! A flamegraph and/or output with line numbers would be perfect. I can see this loop; maybe we're seeing some particular behavior triggered here. (3) Is there any chance you can share a sample hprof?
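To illustrate the point about scan cost, here is a minimal sketch of counting marked entries in a `boolean[]` the way a method like `ObjectMarker.countMarked` plausibly does. The class name, array size, and mark pattern are made up for illustration; this is not MAT's actual code.

```java
public class CountMarkedSketch {
    // Linear scan over the mark array, counting true entries.
    static int countMarked(boolean[] bits) {
        int count = 0;
        for (boolean b : bits) {
            if (b) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 77 million entries, roughly the size reported in this issue;
        // mark every third entry just to have something to count.
        boolean[] bits = new boolean[77_000_000];
        for (int i = 0; i < bits.length; i += 3) {
            bits[i] = true;
        }

        long start = System.nanoTime();
        int marked = countMarked(bits);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(marked + " marked, scanned in ~" + elapsedMs + " ms");
    }
}
```

On typical hardware a single pass like this finishes in tens of milliseconds, which is why a multi-hour parse dominated by this loop suggests either very many repeated scans or some environment-specific slowdown.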
Thanks. Sure, I'll get as much of that for you as I can, probably tomorrow. I only took thread dumps for a subset of the whole parse, so maybe ObjectMarker.countMarked is only part of the regression. Unfortunately, I can't download anything from these systems, so I can't share any dumps, but I can run debug logging and extract snippets if we need to go down that path. I'll look into async-profiler...
Also, timestamps from the parser log would be good, if you can capture them.
Just a note that this might take me some time, because our internal build of MAT currently has some patches which require running with IBM Semeru Runtimes, and async-profiler doesn't support the OpenJ9 JVM. For some time I've wanted to make our internal build not require OpenJ9, but that will take a bit of work... However, I will also try running without commit 7c2cb09 and get back to you on that...
Got it. I think a profile would be very helpful. Sending across an hprof (redacted) might be helpful too; MAT has the export redacted=FULL option, which might be a viable reproduction?
I've been very busy with other work but finally getting back around to this. It turns out IBM Semeru Runtimes does work with async-profiler (despite what it says in the README); however, async-profiler is not available on Windows which is where this issue is occurring. I've added in the IBM Health Center sampling profiler (similar to JFR) and I'm parsing the dump now.
Unfortunately, all downloading is disabled in these environments. We can only copy/paste small text output or images. I should be able to get a flamegraph from the Health Center data. By default, it takes thread dumps every 30 seconds.
I reproduced the 15-hour parse with the profiler. For the CPU samples, ~93% are in `ObjectMarker.countMarked`.
For the 30-second thread dumps, here's the flame graph, which shows a similar thing:
Attaching an interactive SVG to the GitHub issue seems to strip/restrict the SVG capabilities somehow, but it works when uploaded to a repository: https://raw.githubusercontent.com/kgibm/ExampleFlameGraphs/main/flamegraph.svg
Hmm, no, the
I'm interested to know if this is resolved in #63, though I suspect not. |
@jasonk000 Thanks. I've been busy and I'm also in the process of some required upgrades to our internal tooling, but I hope to test out #63 soon. |
I finally got back around to this. The original PHD file was no longer available but I found another large one. Unfortunately, I could not reproduce the original issue when running on the suspect commits. I guess I'll just roll out the latest commits and ask our support teams to keep a watch for suspiciously long parse times and I'll re-open if needed. |
@jasonk000,
Last week, we rolled out a new build of MAT from the latest commit to IBM support servers, and we're getting reports of very slow parses, up to 9 hours. The issue is reproducible, and the majority of time is spent in `ObjectMarker.countMarked`.
Doing some basic profiling with thread dumps every 30 seconds during this step shows a flame graph with most samples in `ObjectMarker.countMarked` (the percentage on each frame is of all threads in each thread dump, though only the `Worker-X` threads are relevant; the key point is that across 30 thread dumps, almost all Worker threads were in `ObjectMarker.countMarked`):

Taking a dump of MAT itself during this step shows that the `boolean[] bits` array has about 77 million entries. I guess it's just incredibly slow to iterate over so many elements in our support server environment. The support servers are virtualized Windows servers. The boxes don't appear to be particularly loaded: 2 CPU sockets and 8 virtual processors, about 25% utilized, 256 GB RAM.

I then uploaded a test build of MAT from commit 35b3ff7 and re-parsed from scratch, and the parse completed in about 30 minutes.
I'll be reverting our MAT builds to that commit, but let me know if you want me to run any debug builds, since I can upload custom builds and retry on this dump.
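If repeated full scans of the mark array are the bottleneck, one possible mitigation is to maintain the count incrementally as objects are marked. This is purely a sketch with hypothetical class and method names, not MAT's actual code or necessarily what any MAT commit or PR does:

```java
public class IncrementalMarker {
    private final boolean[] bits;
    private int marked;

    public IncrementalMarker(int size) {
        this.bits = new boolean[size];
    }

    // Marks an object and updates the running count; O(1) per mark.
    public void mark(int index) {
        if (!bits[index]) {
            bits[index] = true;
            marked++;
        }
    }

    // O(1) lookup instead of an O(n) rescan of the whole array.
    public int countMarked() {
        return marked;
    }
}
```

Note that with multiple `Worker-X` threads marking concurrently, the counter would need to be an `AtomicInteger` or per-worker counters summed at the end; the single-threaded version above is only meant to show the idea.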