Support JSON extract path expression #20
Conversation
@heywhy Thank you very much; I am reviewing this now.
My initial view: a very pleasant set of changes. Good job, and thank you for the contribution. Please review my comments and advise. Thank you again for your time.
@heywhy FYI, I am reconstructing the branch: your changes were based on 0.1.6 and included an undesired cross-merge from master (1.0.1), so the history needs to be rewritten.
Reconstituted WIP is at `feature/gh-20-json-extract-path`.
@heywhy Please retest the branch. Note that `active_tables` has been removed; you should do the following instead, as I have implemented `delete_all` support.
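The snippet that followed this comment was not preserved in the thread; here is a minimal sketch of what clearing data via `delete_all` could look like, assuming a standard Ecto repo named `Repo` and a hypothetical `Post` schema:

```elixir
# Instead of enumerating tables via the removed active_tables,
# clear each schema's data through Ecto's standard Repo.delete_all/2.
# Repo and Post are assumed names for illustration only.
Repo.delete_all(Post)
```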
@heywhy Do not push to develop; use the same branch I published, and revise this PR to target that branch so it conforms to the merge requirements.
That was a mistake on my end; I thought you wanted the commits squashed or something. I will close this PR and create a new one from the other branch, since everything on that branch works as expected.
Yes, that would be good, though name the branch with …
Also, could you add a section to the README telling contributors which merge strategy to use, so as to avoid a situation like mine?
Previously, queries that filter on JSON columns couldn't be used with this adapter. With the changes in this PR, we now can; see the example below.
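The original example was not preserved in this thread; the following is a minimal sketch of the kind of query this enables, assuming a hypothetical `Post` schema with a `:map` column named `metadata`:

```elixir
import Ecto.Query

# Filter on a value nested inside a JSON (:map) column using
# Ecto's bracket access, which compiles to json_extract_path/2.
query =
  from p in Post,
    where: p.metadata["status"] == "published",
    select: p.title

Repo.all(query)
```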
Also, a function called `flush_tables` was added to the adapter module to allow us to drop all data in a repository, considering that some tests might pollute the database and we have no way to implement Postgres-like partitioning at the moment.
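A sketch of how such a helper might be wired into a test suite; the adapter module name and exact signature here are assumptions, since only the function name `flush_tables` appears in this thread:

```elixir
# ExUnit setup: drop all data in the repository between tests.
# `Adapter`, `Repo`, and the :ok return value are assumed names
# for illustration; only flush_tables comes from the thread.
setup do
  :ok = Adapter.flush_tables(Repo)
  :ok
end
```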