Add experimental reporting for large projects via paging #516
Conversation
You can see things set up in this example project, but it uses ENV vars to toggle between various options: https://github.com/danmayer/coverband_rails_example/blob/main/config/coverband.rb
@danmayer I tested it locally, but it didn't work, as we mount the route in a subdirectory: `mount Coverband::Reporters::WebPager.new, at: '/config/coverage'`. It tries to download … Although when I moved the route to … When it works on custom routes, I'll need to test it on staging, which has way more data.
Ahh, ok, thanks @frsantos. I can fix the bug so that it works when mounted at any arbitrary URL. Let me know how the staging tests go.
OK @frsantos, release 6.0.3.rc.2 has a fix for the relative mount path.
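For reference, mounting the pager under a nested path looks something like the following in a Rails routes file (a minimal sketch based on the paths quoted in this thread; the exact path is whatever your app uses):

```ruby
# config/routes.rb
# Mounting the experimental paging report at a nested path; the fix in
# 6.0.3.rc.2 is what makes arbitrary mount points like this work.
Rails.application.routes.draw do
  mount Coverband::Reporters::WebPager.new, at: "/config/coverage"
end
```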
Hi @danmayer. It loads now, and I can see it paginating some thousands of files. But the rows are not clickable, nor do they have any colour for their coverage.
As part of this change I bumped the storage version key, so you might need to run and collect data again; it will start empty. If you view some of the paged data via the browser's network tab, does it show pages coming in with coverage data?
Ah, never mind, I see the problem: when I switched to the most recent paging technique that fixed batch loading, I broke all the links... I will fix it soon.
OK, I fixed the linking and file loading, but found another bug in the way it loads eager and runtime coverage, which will take a bit more time to fix.
I attempted to try this out, but I could not because Ruby 2.7+ is required. I'm embarrassed to say the large project I'm working on is still on 2.6.
Sorry @SyntaxRules-avant, we recently dropped support for 2.6, and I kind of doubt I will bring that back... I know that many folks using Coverband are on older legacy systems, so we do try to allow for a very long deprecation cycle, but it gets hard to support newer capabilities without eventually dropping older versions.

OK @frsantos, the branch is pretty hacky at the moment, but I want to make sure it solves the problem before I make this more solid...
All that being said, I think this should work and show that it can load your data much faster, and many of the improvements will carry over even to the non-paging version of the coverage report, so it should be a nice win when I finish cleaning it up. Let me know if you can give it a try and gather some data to load the large project.
@danmayer It's totally understandable that you upped the requirement to 2.7. I think I found a way forward for an upgrade this week. Hopefully I'll be back to this issue in a couple of weeks.
Thanks @danmayer, it looks promising 🚀 One small thing I noticed: when clicking on a file, the request for retrieving the data is performed twice.
Cool, yeah, the current JS is a bit of a hack, as I wanted to make sure it would load on projects of your size before fully investing in this approach. It sounds like it was able to load much faster for you; I will try to clean this all up and work on a real release soon.
coverband.gemspec
```diff
@@ -39,6 +39,7 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency "minitest-fork_executor"
   spec.add_development_dependency "minitest-stub-const"
   spec.add_development_dependency "mocha", "~> 1.7.0"
+  # spec.add_development_dependency "spy"
```
Going to fix this in a future PR, as it was lost in a merge conflict.
@frsantos I think this might finally be wrapped up. Before I merge it to main and cut a full release, do you want to try out the RC and give any other feedback? Things should all page in reasonably quickly, and no call should take too long individually. Things like deep linking, filtering, sorting, and cell styling should all be working again. If you see anything odd or not working, let me know. There is still no summary data, as the server can't calculate that without the full data set. It also can't list files that are never loaded (i.e. on the filesystem but never loaded into the Ruby VM).
As I add in paging support, I am considering backing out your background caching layer to simplify things and put more effort into this paging approach. @Drowze, you submitted that work; would you be willing to try this RC release and see if it works well for your project? I think for long-term support and growth it might be a better approach than trying to just cache everything in the background. If you do see issues on your project, let me know, as I think there are a few other improvements I could make... I just don't have any real-world project that seems to have issues with the non-paging loading.
Hey @danmayer! Thank you for the great work and for quoting me here! I'm more than happy to test it! 😄 I just tried it with the project I work on; to do that I simply needed to add … For this trial, I simply ran RSpec with Coverband enabled on a few tests, then tried to load the report. Here are my first impressions:
For a more real test: after Wednesday I think I'll be able to try this change in our company's testing environment. I'll try to leave it there for a couple of days and see how the coverage page performs. I expect to post my results here by the end of next week; would you mind waiting for that before removing the caching layer? But that's it for now. It feels like a big improvement, thanks @danmayer! 😄
Thanks so much for all the feedback, this was really helpful. @Drowze, on the first issue I am confused; it shouldn't be trying to use the Coverband service adapter unless you explicitly selected that adapter or you have a service config file:
I think it was checking a constant that shouldn't exist at all if one wasn't configured... Anyways, I think I fixed that issue, but I am confused how it would just now occur, as those code paths should have existed for a while. OK, for the slow page, unresponsive UI, etc.: I think I am going to let it do all the paging behind the scenes and then render the data when it is all done. This means the page doesn't stall out by changing the DOM as often. The UI in theory can do paging, but the filters and sorts then only work per page, which makes it much harder to actually use them to find what you might be looking for. I was able to cut another Redis call off the paged load, but I have about 700ms for 250 files of data per page. I will clean this up a bit more, but I think it might be good enough to ship now.
After having it collect coverage data in our internal testing environment (we're using this fork, which is just this branch as of my last comment plus a fix for the first issue I pointed out), it seems to me that this change is fairly good and I think we'll adopt it soon 😄 One very strange thing happened though: every … The sequence of events traced by our Datadog tracer is something like:
I'd have to get a snapshot of the Redis database to further debug it locally, but that's what I have for now. Hopefully this piece of data can give you a clue about what could be causing this slowdown 🤔
Odd on the extra time... not sure there; each page on my machine was about 700ms. I want to add some tracking in my tests to ensure I never increase the number of Redis queries per number of files more than expected, so I will try to get that added into the tests and see if there are any other optimizations I can make.
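The kind of regression guard described here can be sketched as a counting wrapper around a store (a hypothetical illustration; none of these class or method names are Coverband's actual API):

```ruby
# Illustrative sketch: wrap a store, count how many queries a page load
# makes, and assert an upper bound in tests so a refactor can't silently
# multiply the number of Redis round-trips per page.
class CountingStore
  attr_reader :calls

  def initialize(store)
    @store = store
    @calls = 0
  end

  # Delegate to the wrapped store while tallying every query made.
  def get_page(page)
    @calls += 1
    @store.get_page(page)
  end
end
```

In a test, you would load a report page through the wrapper and then assert that `calls` stays within a fixed budget for the number of files fetched.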
The main release does have the fix that you patched on your branch, so you should be good to try it out.
I know what is going on with that long wait time, @Drowze; I am working on a fix that should speed things up a good deal.
OK @Drowze, I just released 6.1.1, which fixes that slowdown issue... It takes my local test app with 6,500 files from 700ms down to 40ms per page when loading the paging data. It all loads super snappy now.
Just tested the latest 6.1.1 and it is fantastically fast now @danmayer! Thanks a lot for the hard work on it! :D I found one issue, but I'm loving it already. Here's my feedback. The good:
The bad:
Interesting... definitely a bug, and it should get faster once I sort out what is going on. I have some ideas of where the total file count and the number of files returned could get out of sync.
Blah... I think I know what to look at and fix, but my family and I all just came down with something, so it will likely be a few days until I get to it.
OK, I found a few reasons the static count and the full coverage array can be different or off by a few... for example, if the hash of a file has changed but we have yet to record any coverage for the new version of the file. Anyways, I found a good halting condition, so it will only ever over-fetch by one page, which lets the front end know it has pulled all the data. So try out …
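The halting condition described above can be sketched in plain Ruby (an illustration only; these names are not Coverband's internals): keep fetching pages until one comes back short, which tolerates a stale static count at the cost of at most one extra page fetch.

```ruby
# Illustrative sketch of the "over-fetch by at most one page" halting
# condition: fetch_page is any callable returning an array of at most
# page_size rows for a given page index.
def collect_all_pages(page_size:, fetch_page:)
  results = []
  page = 0
  loop do
    rows = fetch_page.call(page)
    results.concat(rows)
    # A short (or empty) page signals the end, even if a stale file
    # count suggested more data remained.
    break if rows.size < page_size
    page += 1
  end
  results
end
```

When the data size is an exact multiple of the page size, the loop makes one extra, empty fetch, which is exactly the "over fetch by one page" trade-off described above.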
Hey @danmayer! Thanks again for the hard work put into this gem! 😄 I just tried the new …
Running it locally (i.e. with negligible Redis server latency) is impressively fast! I haven't talked about this before, but the memory usage now feels much, much more efficient; my browser doesn't choke at all trying to load this big page anymore (something that used to happen quite a bit before these changes). Also, out of curiosity, this is a problem present in the … By the way, with this new paging I am now fully happy to ditch the caching logic I implemented earlier; that will not be missed 😄 Two minor questions now:
[1]: I tested it locally by downloading a Redis snapshot from our testing environment and importing it locally.
Yeah, I could expose a configurable page size; I just tried a few options and settled on the current value as a good default. I think for now I will keep it private just to keep things simpler. The difference would be for files we are aware of but that have no runtime coverage, or for files that have changed but have yet to receive coverage on the latest version of the file. This was one of the trade-offs of going with paging: I didn't come up with a good way to handle the never-hit files, as we page through all the files that do have coverage... I could likely think of something to cover that case; it might need to be something like a one-off endpoint that can calculate just that, or it could build up a cache while paging and send a final page of data containing the uncovered files.
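The "final page of uncovered files" idea floated above could be sketched like this (purely hypothetical; these names are not Coverband's API): remember every filename seen while paging through coverage, then diff against the list of tracked files at the end.

```ruby
require "set"

# Hypothetical sketch: accumulate filenames seen across all coverage
# pages, then emit a final "page" of tracked files that never appeared,
# i.e. files on disk that were never loaded into the Ruby VM.
def uncovered_final_page(tracked_files, coverage_pages)
  seen = Set.new
  coverage_pages.each { |page| page.each_key { |file| seen << file } }
  tracked_files.reject { |file| seen.include?(file) }
end
```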
Thanks again so much for your testing. I will cut this release and at some point clear up the caching... Yeah, much of the original view layer in Coverband came from SimpleCov, but the way things work under the hood has changed fairly significantly for a while now... I am not sure how much work it would take to get this ported into SimpleCov, but I am guessing it would be pretty significant.
This should address issue #484 and allow much larger web reports to be displayed, even on resources that limit request time to 30s per request. This change adds a paged JSON report and a poll method that loads the report page by page until the full report is available.
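The poll-and-assemble flow described here can be sketched as merging per-page payloads into a single report (the payload shape, a hash keyed by filename, is an assumption for illustration, not the PR's actual JSON contract):

```ruby
# Illustrative sketch: each polled page is assumed to map
# filename => coverage data, and a file appears on exactly one page,
# so merging the pages in order reassembles the full report.
def assemble_report(pages)
  pages.reduce({}) { |report, page| report.merge(page) }
end
```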
TODO: before main release
Instructions to use:

```ruby
# Gemfile
gem "coverband", "6.0.3.rc.1"

# Coverband configuration (e.g. config/coverband.rb)
config.store = Coverband::Adapters::HashRedisStore.new(Redis.new(url: ENV['REDIS_URL'] || 'redis://localhost:6379/0'))

# config/routes.rb
mount Coverband::Reporters::WebPager.new, at: '/coverage'
```