More granular locking in cargo_rustc #4282
Comments
As part of #5931, we've talked about changing the target directory layout so that there is a directory per package. This would give us more definitive units to lock.
In #5931, we discuss having a directory per intermediate artifact inside of a user-wide cache directory. Each intermediate artifact would be lockable, so there is exclusive access on the initial write and then multiple readers after that, which prevent removal during builds. Depending on packages directly out of this shared location is important so we don't waste time copying things out. To keep things simple, ideally we model the target directory the same way. So in a way, fixing this issue is a perfect subset of #5931.

We'd need a separate lock for the artifact directory, though. Any compilation that changes the fingerprint but not the

Therefore, the benefit of this feature would be limited to one of
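To make the exclusive-write / shared-read scheme concrete, here is a minimal sketch of what a per-artifact lock could look like, assuming the `fs2` crate for advisory file locks; the helper name and lock-file name are illustrative, not Cargo's actual implementation:

```rust
use std::fs::OpenOptions;
use std::io;
use std::path::Path;

use fs2::FileExt; // assumed dependency: advisory file locks (fs2 = "0.4")

/// Hypothetical helper: the first process to need an artifact takes an
/// exclusive lock while it writes; everyone then holds a shared lock so the
/// artifact can be read concurrently but not removed out from under a build.
fn lock_artifact(artifact_dir: &Path) -> io::Result<std::fs::File> {
    let lock = OpenOptions::new()
        .create(true)
        .read(true)
        .write(true)
        .open(artifact_dir.join(".artifact-lock"))?; // lock-file name is illustrative

    if FileExt::try_lock_exclusive(&lock).is_ok() {
        // We won the race: produce the artifact, then give up exclusivity.
        // build_artifact(artifact_dir)?;
        FileExt::unlock(&lock)?;
    }
    // Readers (including the original writer) block removal but not each other.
    // Note: a real implementation must re-check the artifact here, since
    // another process could slip in between the unlock and this call.
    FileExt::lock_shared(&lock)?;
    Ok(lock)
}
```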
I think it would be good to also keep an eye on performance, since this could introduce more OS overhead. I'm also uncertain whether we will run into lock limits, particularly on networked filesystems or in environments like Docker. It looks like the number of locks is configurable on Linux, but I don't immediately see any strict limits anywhere else.
Any updates?
One important aspect of this (for me at least) is prioritizing user requests over background requests (for example from LSPs). Building Zed consumes all my CPU cores, and I'd like the rust-analyzer diagnostics to get out of my way when I run a build myself.
### What does this PR try to resolve?

While doing some investigation into the theoretical performance implications of #4282 (and #15010 by extension), I was profiling Cargo with some experimental changes (still a work in progress). In the meantime, I noticed that we do not have spans for rustc invocations. I think these would be useful when profiling `cargo build`. (`cargo build --timings` exists, but it is more geared towards debugging a slow-building project, not Cargo itself.)

For reference, below is an example before/after of a profile run of a dummy crate with a few random dependencies.

#### Before

[profile screenshot]

#### After

[profile screenshot]
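As a rough illustration of the kind of span described here (a sketch assuming the `tracing` crate; the wrapper function and field names are made up for the example, not the PR's actual code):

```rust
use std::io;
use std::process::{Command, ExitStatus};

use tracing::debug_span; // assumed dependency: the `tracing` crate

/// Illustrative wrapper: give each rustc invocation its own span so a
/// profiler consuming tracing data can attribute wall-clock time to
/// individual crate compilations.
fn run_rustc(crate_name: &str, args: &[&str]) -> io::Result<ExitStatus> {
    // The span covers spawning rustc and waiting for it to exit.
    let _span = debug_span!("rustc", krate = crate_name).entered();
    Command::new("rustc").args(args).status()
}
```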
For GC, Cargo has extended its file lock mechanism with effectively the semantics of

Another potential angle on this problem is that we grab a read lock for the workspace build directory and write only to a process build directory. Once we are done, we then move over any parts of the process build directory that weren't already in the workspace build directory. Risks include (1) race conditions, and (2) builds would happen in parallel rather than serialized, potentially making the machine grind to a halt (currently
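A minimal sketch of that read-lock-then-publish flow, again assuming the `fs2` crate for locking; the directory layout, lock-file name, and helper are illustrative only:

```rust
use std::fs;
use std::io;
use std::path::Path;

use fs2::FileExt; // assumed dependency: advisory file locks

/// Hypothetical flow for the idea above: hold only a shared lock on the
/// workspace build directory, do all writes in a process-private directory,
/// then publish anything the workspace directory doesn't already have.
fn build_then_publish(workspace_build: &Path, process_build: &Path) -> io::Result<()> {
    let lock = fs::File::create(workspace_build.join(".shared-lock"))?; // name illustrative
    FileExt::lock_shared(&lock)?; // readers don't block each other

    // ... run the build here, writing only under `process_build` ...

    for entry in fs::read_dir(process_build)? {
        let entry = entry?;
        let dest = workspace_build.join(entry.file_name());
        if !dest.exists() {
            // This check-then-rename is exactly where the race-condition risk
            // lives; rename also isn't atomic across filesystems.
            fs::rename(entry.path(), &dest)?;
        }
    }
    FileExt::unlock(&lock)?;
    Ok(())
}
```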
Right now whenever a build happens we lock the entirety of the `target` directory for the whole build, but it may be possible for us to have a more granular locking strategy which allows multiple instances of Cargo to proceed in parallel instead of serializing them.
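For contrast with the sketches above, the status quo amounts to something like the following: a single exclusive lock covering the whole directory (a simplified illustration; the lock-file name and helper are made up, and `fs2` is an assumed dependency):

```rust
use std::fs::File;
use std::io;
use std::path::Path;

use fs2::FileExt; // assumed dependency: advisory file locks

/// Sketch of the status quo described above: one exclusive lock guards the
/// whole target directory, so a second `cargo` invocation blocks here until
/// the first has finished its entire build.
fn lock_whole_target(target_dir: &Path) -> io::Result<File> {
    let lock = File::create(target_dir.join(".cargo-lock"))?; // file name illustrative
    FileExt::lock_exclusive(&lock)?; // blocks until no other process holds it
    Ok(lock)
}
```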