feat!: allow passing down existing dataset for write #3119
base: main
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #3119      +/-   ##
==========================================
+ Coverage   77.87%   77.92%   +0.05%
==========================================
  Files         240      242       +2
  Lines       81630    81903     +273
  Branches    81630    81903     +273
==========================================
+ Hits        63566    63825     +259
- Misses      14830    14860      +30
+ Partials     3234     3218      -16

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Force-pushed from 5bca1f8 to ea24413.
Left some questions, thanks!
/// Whether to use move-stable row ids. This makes the `_rowid` column stable
/// after compaction, but not updates.
Even during re-compaction?
Interesting that updates aren't covered; will an update create new rows with a new `_rowid`?
Even during re-compaction, yeah. We don't keep them during updates because that requires invalidating the secondary indices.
You can read more in the design docs:
- Move-stable row ids
- Primary keys - when we make row ids stable after updates too
Thanks Will! Will take a look at the design docs!
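For context, a minimal sketch of enabling move-stable row ids at write time. This assumes a boolean flag on WriteParams (the field name below is a guess) together with the InsertBuilder introduced in this PR; exact names and signatures may differ.

// Sketch only: field and method names are assumptions, not confirmed by this PR.
use lance::dataset::{InsertBuilder, WriteParams};

let params = WriteParams {
    // Ask the writer to assign move-stable row ids: `_rowid` then survives
    // compaction (rows are moved, not re-assigned) but is still replaced when
    // a row is updated, so secondary indices don't have to be invalidated.
    enable_move_stable_row_ids: true,
    ..Default::default()
};

let dataset = InsertBuilder::new("memory://demo")
    .with_params(&params)
    .execute(batches)  // `batches` is a placeholder Vec<RecordBatch>
    .await?;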
///
/// This can be used to stage changes or to handle "secondary" datasets
/// whose lineage is tracked elsewhere.
pub fn with_detached(mut self, detached: bool) -> Self {
Interesting, sounds like git.
Is anybody using this feature, or what prompted the request for it?
It's kind of like git, though there's only one branch, and everything else is detached.
Weston added this feature to support "balanced storage" for large blob columns. Basically we used this to create a separate hidden dataset that stores the blob data, and by doing this we could compact that data at a different rate than other columns.
Interesting. Maybe with 2.1 we won't need this kind of detached dataset for blobs?
Actually it was designed with 2.1 in mind. The ideal rows per file differs so much between small and wide columns that they need to essentially be in different datasets to offer good enough OLAP and random access performance. We wrap it up quite seamlessly though, so for the most part it feels like just another column.
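To make the detached option concrete, here is a hedged usage sketch. CommitBuilder, with_detached, and execute come from this PR; the destination and transaction values are placeholders.

// Sketch only: surrounding details are assumptions and may differ from the final API.
let detached_version = CommitBuilder::new(dataset_uri)
    // The resulting commit is not attached to the main version history; it can
    // be referenced directly but won't appear when reading the latest version.
    .with_detached(true)
    .execute(transaction)
    .await?;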
/// Pass an object store registry to use.
///
/// If an object store is passed, this registry will be ignored.
pub fn with_object_store_registry(
nit: is there a macro we can use to generate this builder boilerplate?
I could maybe write a macro, but it doesn't seem worth it to me.
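For illustration only (not something this PR adds), a declarative macro along these lines could generate the with_* setters; the CommitBuilder impl and the detached field name below are assumptions.

// Illustrative sketch, not part of this PR: expands into plain builder setters.
macro_rules! builder_setters {
    ($($(#[$doc:meta])* $method:ident => $field:ident: $ty:ty),* $(,)?) => {
        $(
            $(#[$doc])*
            pub fn $method(mut self, value: $ty) -> Self {
                self.$field = value;
                self
            }
        )*
    };
}

// Hypothetical usage: generates `with_detached(mut self, value: bool) -> Self`.
impl CommitBuilder {
    builder_setters! {
        /// Whether to create a detached commit.
        with_detached => detached: bool,
    }
}

The trade-off is the usual one: less repetition, but the generated signatures are harder to read in the source and in rustdoc, which is presumably why it isn't worth it here.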
BREAKING CHANGE: the return value in Rust of `write_fragments()` has changed to `Result<Transaction>`.

Related APIs and issues referenced in this change:
- `write_fragments` and `Dataset::commit()` (#3058)
- `InsertBuilder` and `CommitBuilder`
- `LanceDataset.insert()` method to modify existing datasets
- `Table.add` resets `index_cache_size` (lancedb#1655)
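A hedged sketch of how the new two-step write might look, based only on the names above: InsertBuilder stages a write as a Transaction, and CommitBuilder commits it. The destination string, with_params, execute_uncommitted, and `batches` are assumptions, not confirmed signatures.

// Sketch only: exact paths, types, and method names are assumptions.
use lance::dataset::{CommitBuilder, InsertBuilder, WriteParams};

// Stage the write: instead of committing immediately, obtain a Transaction
// describing the new fragments (mirroring write_fragments() now returning
// Result<Transaction>).
let transaction = InsertBuilder::new("s3://bucket/table.lance")
    .with_params(&WriteParams::default())
    .execute_uncommitted(batches)
    .await?;

// Commit the staged transaction separately, optionally with with_detached(true).
let dataset = CommitBuilder::new("s3://bucket/table.lance")
    .execute(transaction)
    .await?;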