
Preserve granularity of payload resync notices #2287

Closed

Conversation


@nvmkuruc nvmkuruc commented Feb 16, 2023

Description of Change(s)

  • Update the ancestral payload discovery code to more explicitly find prims that have payloads authored but not yet loaded. This prevents the parents of loaded prims from being unnecessarily flagged as needing a resync.
  • Add basic test coverage to validate expected payload resync granularity
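The first bullet can be illustrated with a minimal sketch (not the actual USD implementation; the function name, path representation, and sets are assumptions for illustration): walking a prim path's ancestors and collecting only those whose payloads are authored but not loaded, so already-loaded ancestors are never flagged.

```python
def unloaded_payload_ancestors(path, has_payload, load_set):
    """Walk the ancestor chain of `path` (root-down), returning the paths
    that have an authored payload but are not in `load_set`.

    `path` is a '/'-separated prim path string; `has_payload` and
    `load_set` are sets of such paths. This is a simplified model of the
    discovery described above, not pxr code.
    """
    result = []
    parts = path.strip('/').split('/')
    for i in range(1, len(parts) + 1):
        ancestor = '/' + '/'.join(parts[:i])
        # Only prims with an authored-but-unloaded payload need a resync.
        if ancestor in has_payload and ancestor not in load_set:
            result.append(ancestor)
    return result
```

Under this model, with payloads authored on `/World` and `/World/Child` and `/World` already loaded, only `/World/Child` is reported, matching the granularity the change preserves.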

Fixes Issue(s)

Checklist

  • I have verified that all unit tests pass with the proposed changes
  • I have submitted a signed Contributor License Agreement

@nvmkuruc nvmkuruc force-pushed the ancestralpayload branch 2 times, most recently from 970cd7b to fdbc681 Compare February 16, 2023 22:23
@nvmkuruc nvmkuruc changed the title Fix granularity of payload resync notices Preserve granularity of payload resync notices Feb 16, 2023
@tallytalwar
Contributor

Filed as internal issue #USD-8024

pixar-oss pushed a commit that referenced this pull request Apr 3, 2023
recompose.  This code was conflating a desire to avoid redundantly reinserting
a path into `finalLoadSet` with the stop condition of finding a loaded
ancestor.  Now we always terminate when we find a loaded ancestor, and
check separately for redundant reinsertion.

Fixes #2287

(Internal change: 2269249)
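The conflation the commit message describes can be sketched as follows (a hypothetical illustration; the function and variable names are assumptions, not the real pxr code). The fix separates the two concerns: the walk always terminates when it reaches a loaded ancestor, and the check for redundant reinsertion into `finalLoadSet` is performed independently.

```python
def collect_paths_to_load(path, loaded, final_load_set):
    """Walk from `path` up toward the root, adding each ancestor to
    `final_load_set` until a loaded ancestor is found.

    `path` is a '/'-separated prim path; `loaded` is the set of already
    loaded paths; `final_load_set` is mutated in place and returned.
    """
    parts = path.strip('/').split('/')
    for i in range(len(parts), 0, -1):
        ancestor = '/' + '/'.join(parts[:i])
        if ancestor in loaded:
            break  # stop condition: always terminate at a loaded ancestor
        if ancestor not in final_load_set:
            final_load_set.add(ancestor)  # separate redundancy check
    return final_load_set
```

Using `final_load_set` membership as the stop condition instead would end the walk early whenever a path had already been inserted, which is the conflation the fix removes.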
@nvmkuruc
Collaborator Author

@gitamohr has landed a more targeted fix. Closing.

@nvmkuruc nvmkuruc closed this Apr 17, 2023
@nvmkuruc nvmkuruc deleted the ancestralpayload branch December 29, 2023 03:09