Status of io.sql.get_schema as public function? #9960
Comments
I've used this (or something similar) in my ... As far as returning a ...
It seems like the more fundamental issue, at least in regards to the stackoverflow question, is that ... while ... Is there a reason for this discrepancy?

So confused. Why the difference in dtypes?
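The code samples behind the dtype question were lost in extraction, but the underlying behavior is easy to check. A minimal sketch (an illustration, not code from the thread): in recent pandas versions an empty positional slice keeps the original dtypes, which is exactly what a schema-only write would rely on.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [1.5, 2.5], "c": ["x", "y"]})

# An empty slice has zero rows but preserves the column dtypes,
# so it can serve as a "structure only" frame.
empty = df.iloc[:0]

print(empty.dtypes.tolist() == df.dtypes.tolist())  # True
```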
@artemyk Can you open a new issue for that? (to not hijack this thread)
I think the point here is that there should be a way to create the database table from a DataFrame, without having to do a non-selection to obtain an empty dataframe to write.
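What this comment describes can already be sketched with `get_schema` itself: generate the DDL from the frame and execute it, without writing any rows. The table name and the in-memory sqlite connection below are illustrative.

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# get_schema renders a CREATE TABLE statement for the frame; with no
# connection given it falls back to the sqlite dialect.
ddl = pd.io.sql.get_schema(df, "my_table")

con = sqlite3.connect(":memory:")
con.execute(ddl)  # the table now exists, with zero rows inserted
```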
@jorisvandenbossche OK, I will open a new issue. I did come across this because I think it's at the root of the dtypes being lost on an empty dataframe. And I do think that inserting an empty dataframe should insert the dtypes correctly. Though ...
@artemyk yes, I agree writing an empty frame should also work (but more an issue with indexing than with the sql code).

@TomAugspurger I see you didn't use ...

On returning an sqlalchemy `Table` ...
@jorisvandenbossche Yeah, I didn't know about the ... That's a fair point about making adjustments to the generated schema. And getting the actual schema should just be a ... The order of arguments should be aligned with ...
@jorisvandenbossche I would consider making a separate function to return a ...
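A separate function returning a `Table` could look roughly like this hand-rolled sketch. The `frame_to_table` helper and its dtype mapping are hypothetical, not pandas API, and the mapping is deliberately minimal.

```python
import pandas as pd
import sqlalchemy as sa

# Hypothetical dtype -> SQLAlchemy type mapping (incomplete on purpose;
# anything unmapped falls back to Text).
_TYPE_MAP = {"int64": sa.Integer, "float64": sa.Float, "bool": sa.Boolean}


def frame_to_table(df: pd.DataFrame, name: str, metadata: sa.MetaData) -> sa.Table:
    """Build a sqlalchemy Table mirroring the frame's columns."""
    columns = [
        sa.Column(col, _TYPE_MAP.get(str(dtype), sa.Text)())
        for col, dtype in df.dtypes.items()
    ]
    return sa.Table(name, metadata, *columns)


tbl = frame_to_table(pd.DataFrame({"a": [1], "b": ["x"]}), "demo", sa.MetaData())
print(str(sa.schema.CreateTable(tbl)))
```

A `Table` object composes with the rest of SQLAlchemy (`metadata.create_all`, `select`, reflection), which a raw DDL string does not.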
Yes, I was also thinking that using a new name for ... Name suggestions?
Maybe something alluding to 'create statement', as 'schema' is actually a confusing name (in some databases this has another meaning, e.g. see also the ...). Some possibilities (but I am not yet convinced ...):

Other ideas: for returning the ...
Another inconsistency to think about is that ... What about the following set of arguments to both ...?

I propose eliminating/deprecating the ...
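The alignment being proposed can be sketched as a thin wrapper that accepts `to_sql`'s leading positional order (`frame, name, con`) and forwards to today's `get_schema`. The wrapper name and signature are hypothetical.

```python
import pandas as pd


def get_schema_aligned(frame, name, con=None, keys=None, dtype=None):
    """Hypothetical wrapper: same leading arguments as to_sql (frame, name, con)."""
    return pd.io.sql.get_schema(frame, name, keys=keys, con=con, dtype=dtype)


# con can now sit in the third positional slot (or be omitted entirely).
ddl = get_schema_aligned(pd.DataFrame({"a": [1]}), "t")
print("CREATE TABLE" in ddl)  # True
```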
@jorisvandenbossche status?
@jorisvandenbossche status?
Will try to look at it next week, but no blocker for 0.17.0 |
Status of this issue?
Are there any plans to implement @artemyk's suggestion of unifying the APIs of ...?
@mroeschke @jorisvandenbossche @artemyk Any updates on this? Looks like there's been no progress with this for a while now.
Someone would need to present a concrete proposal; this has not been discussed in years.
@jreback If that's possible, I'd like to take this issue and present a proposal.
Semi-related question: is this issue the reason why Pylance can't see this function in the ...? E.g. if I do `import pandas.io.sql as sql` and then `foo = sql.get_schema(...)`, Pylance throws an error saying: `"get_schema" is not a known member of module`. Why is that?
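A likely explanation (an assumption on my part, not confirmed in this thread): Pylance resolves pandas through type stubs, and stubs easily omit undocumented helpers, so the static checker and the runtime disagree. The runtime side is easy to verify:

```python
import pandas.io.sql as sql

# The attribute exists and is callable at runtime, whatever the stubs say.
print(callable(sql.get_schema))  # True
```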
Also asking the same question as @wtfzambo.
+1 for a public function to get the SQLAlchemy table schema from an existing dataframe (without having to insert rows into the actual table).
At this moment, `pd.io.sql.get_schema` is not documented (not in the API docs, and not in the io docs). But it is a potentially useful function, so I think it would be good to be more explicit about its status (by mentioning it in the docs).

However, there are some quirks about the function:

- `pd.io.sql.get_schema(frame, name, flavor='sqlite', keys=None, con=None, dtype=None)` -> the `flavor` keyword is in the third place, while we want to deprecate it (and this means you cannot do `get_schema(df, 'name', engine)`, but always have to do `get_schema(df, 'name', con=engine)`). Ideally this should have the same arguments (order) as `to_sql` (`pd.io.sql.to_sql(frame, name, con, flavor='sqlite', schema=None, if_exists='fail', index=True, index_label=None, chunksize=None, dtype=None)`); only `chunksize` is not relevant.
- Should it return the sqlalchemy `Table` instead of the string itself?

Those we maybe should first solve before making it more explicitly public.
Triggered by http://stackoverflow.com/questions/29749356/python-pandas-export-structure-only-no-rows-of-a-dataframe-to-sql/
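The keyword-only workaround described in the issue body can be shown concretely. A sketch using a sqlite3 connection in place of an engine: `con` must be spelled out by keyword, because the third positional slot is taken by another parameter.

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0]})
con = sqlite3.connect(":memory:")

# get_schema(df, "t", con) would bind con to the wrong parameter;
# the connection has to be passed as con=.
ddl = pd.io.sql.get_schema(df, "t", con=con)
print(ddl.splitlines()[0])
```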