Parallel upload? #1
Interesting, I am not sure how clients natively handle multipart form streams.
Perhaps handling an interruption would be tricky. Server performance needs to be considered; one user could choke it up uploading a lot of files at once. Found this looking around: https://github.com/flowjs/flow.js. |
Oh wow, flowjs looks amazing, and it's only 1k! https://unpkg.com/flowjs@1.0.0/lib/ seems like a prime candidate for using as the transport mechanism… It also gives you access to the progress of the upload, so maybe that can be provided on `mutation.progress` as a number or a callback…
|
I think I will close this for now as it is not something I intend to work on. It is likely to introduce a lot of complexity and what we have now is a fairly standard approach that seems pretty efficient. If anyone has any interesting ideas or benchmarks feel free to necro this issue. In the meantime, I suppose you could handle parallel uploads in your app by firing separate mutations at once, and awaiting them all to finish before doing a followup mutation. It would then be up to you to decide what happens if some fail or the operation is interrupted. |
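The workaround suggested above (separate mutations fired at once, awaited together) can be sketched in app code. This is only an illustration: `client` is assumed to be an Apollo-style client whose `mutate()` returns a promise, and the mutation name is hypothetical.

```js
// Sketch of app-level parallel uploads: fire one mutation per file
// and await them all before continuing.
async function uploadAll (client, files) {
  const results = await Promise.allSettled(
    files.map(file =>
      client.mutate({ mutation: 'uploadFile', variables: { file } })
    )
  )
  const failed = results.filter(result => result.status === 'rejected')
  // Up to the app: retry the failures, roll back, or surface an error.
  if (failed.length > 0) {
    throw new Error(`${failed.length} of ${files.length} uploads failed`)
  }
  return results.map(result => result.value)
}
```

Using `Promise.allSettled` rather than `Promise.all` means one failed upload does not hide the outcome of the others, which is exactly the "what happens if some fail" decision left to the app.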
I've been mulling this over for a while, and I thought I'd just write down my thoughts on this:
So in conclusion, this library is good to use now and is easily updatable to more complex requirements. I would love it if it added flow.js as a transport mechanism, falling back to fetch if not supported, since that would make the user experience much nicer. @jaydenseric One question though, how open are you to PRs that implement extra features, like switching to XHR to get progress support? |
I'm open to PRs, but will be picky. Keep in mind minified flow.js is 14.5 KB.

How would progress work with Apollo client? The data loading state is just a boolean. I'm all ears if there is something clever we can do.

These packages (client and server) expressly exist to allow files to be uploaded to your GraphQL server. If you do not want to upload to your GraphQL server, but upload to Amazon, then there is no problem? Use whatever their API is on the client, get the result, and send it in a regular mutation as strings.

If we were to make the upload method configurable, this line here is the place to start. The network interface would be set up something like this:

```js
import ApolloClient from 'apollo-client'
import { createNetworkInterface, amazonMethod as uploadMethod } from 'apollo-upload-client'

const client = new ApolloClient({
  networkInterface: createNetworkInterface({
    uri: '/graphql',
    uploadMethod
  })
})

async function amazonMethod (file) {
  const { name, type, size, url } = await myAmazonAPI(file)
  return {
    name,
    type,
    size,
    url
  }
}
```

But the current setup assumes a multipart form and the server specifically looks for files; it would need to be rewritten. Really, it should be a different package because it's a different concept. Since I added batching support, the server expects 100% of requests to be multipart forms, so currently you can't hit the API from a client other than apollo-upload-client.

I'm *this* close to just base64 encoding files and sending them up as vanilla mutation variables. No engineering or config needed, just a ~33% larger upload.
|
On Tue, Mar 28, 2017 at 3:42 PM Jayden Seric wrote:

> I'm open to PRs, but will be very picky. Keep in mind minified flow.js is 14.5 KB.

I was looking at the wrong flow.js :( Still, not terrible, and it can be loaded on demand via webpack. Uploading a 400 KB file takes much longer, so this could be optional.
> How would progress work with Apollo client? The data loading state is just a boolean. I'm all ears if there is something clever we can do.
First of all, we must use XHR, not fetch. Then keep a singleton WeakMap that maps each File object to its upload progress event emitter, and export an `onProgress(File, cb)` that attaches `cb` to the event emitter for that File. You can't augment the Promise returned by apollo-client, since that is too brittle.
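A minimal sketch of that idea, using plain listener arrays rather than a full event emitter; the names `onProgress` and `reportProgress` are hypothetical, not anything the library exports.

```js
// Map each File (any object identity works) to its progress listeners.
// A WeakMap lets an entry be garbage-collected along with the File.
const progressListeners = new WeakMap()

// App code: subscribe to progress updates for one specific file.
function onProgress (file, callback) {
  if (!progressListeners.has(file)) progressListeners.set(file, [])
  progressListeners.get(file).push(callback)
}

// Transport code: the XHR layer would call this from the
// xhr.upload 'progress' event with event.loaded and event.total.
function reportProgress (file, loaded, total) {
  for (const callback of progressListeners.get(file) || []) {
    callback(loaded / total)
  }
}
```

Because the subscription is keyed by the File object itself, this stays completely outside the mutation promise, which is what makes it compatible with apollo-client's boolean loading state.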
> These packages (client and server) expressly exist to allow files to be uploaded to your GraphQL server. If you do not want to upload to your GraphQL server, but upload to Amazon, then there is no problem? Use whatever their API is on the client, get the result, and send it in a regular mutation as strings.
Yes, but in the app the API would be the same, just a different configuration of the NetworkInterface…
> If we were to make the upload method configurable, this line here <https://github.com/jaydenseric/apollo-upload-client/blob/v3.0.1/src/helpers.js#L40> is the place to start. `file` would be set via a function that takes the actual file and returns the required metadata. Should we standardize the allowed metadata, or just have people set the type on the server to match the method?
I think the metadata should have some fixed fields like name, size, and date(s), and then any others that are useful for the chosen upload method.
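For illustration, that could look something like the following; the fixed field names and the `extra` escape hatch are assumptions, not anything either package defines.

```js
// Build upload metadata with a fixed core plus method-specific extras
// (e.g. an S3 upload method could pass { url } after uploading).
function normalizeFileMetadata (file, extra = {}) {
  return {
    name: file.name,
    type: file.type,
    size: file.size,
    lastModified: file.lastModified,
    ...extra
  }
}
```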
> But the current setup assumes a multipart form and the server specifically looks for files; it would need to be rewritten. Really, it should be a different package because it's a different concept.
Well, that is the server side for form upload. Server side for AWS would look different; it could have something to create pre-signed URLs, for example.
> Since I added batching support, the server expects 100% of requests to be multipart forms, so currently you can't hit the API from a client other than apollo-upload-client. This is unintentional and temporary, so when I get the server to support both regular and special multipart form file upload requests again this might be easier.
Well, I would be OK with it ignoring non-form POSTs, so GraphiQL would work normally. Batching vs non-batching interface is a free choice of the dev; I don't think it's a problem if you only support one of the two…
> I'm *this* close to just base64 encoding files and sending them up as vanilla mutation variables. No engineering or config needed, just a ~33% larger upload.
It's worse than that: it doesn't seem to compress well. I just tested against a 184 MB XML file; the gzip size is 10 MB, but the base64-then-gzip size is 36 MB. I think flowjs is a better option then.
|
I'm wondering if this project could transparently support parallel uploads? If you provide a FileList, it could perhaps send 3 at a time? Or would that break a single-form-post assumption on the server?
Also not sure if it's worth it, if parallel uploading takes just as long as serial uploading.