ImageGenerator for multiple inputs #3386
You need a generator that yields something of the form `([X1, X2], Y)` — a list with one array per input, plus the labels.
That is what I would like to do, but I don't really know how to create one that gives proper results. One issue I see is, for example, related to shuffling: if I used the original `ImageDataGenerator` for each input separately, the shuffled orders would not match.
Ok, I made it work! For anybody asking themselves the same question, here is my example solution:

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

def createGenerator(X, I, Y):
    while True:
        # shuffled indices
        idx = np.random.permutation(X.shape[0])
        # create image generator
        datagen = ImageDataGenerator(
            featurewise_center=False,             # set input mean to 0 over the dataset
            samplewise_center=False,              # set each sample mean to 0
            featurewise_std_normalization=False,  # divide inputs by std of the dataset
            samplewise_std_normalization=False,   # divide each input by its std
            zca_whitening=False,                  # apply ZCA whitening
            rotation_range=10,                    # randomly rotate images in the range (degrees, 0 to 180)
            width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
            height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
            horizontal_flip=False,                # randomly flip images horizontally
            vertical_flip=False)                  # randomly flip images vertically
        batches = datagen.flow(X[idx], Y[idx], batch_size=64, shuffle=False)
        idx0 = 0
        for batch in batches:
            idx1 = idx0 + batch[0].shape[0]
            # pair each augmented image batch with the matching rows of the second input I
            yield [batch[0], I[idx[idx0:idx1]]], batch[1]
            idx0 = idx1
            if idx1 >= X.shape[0]:
                break
```
Here's a piece of code that formats the outputs of two generators; it can be extended to any number of generators. Assuming the output of both generators is of the form `(x, y)` and the wanted output is of the form `([x1, x2], y1)`:

```python
def format_gen_outputs(gen1, gen2):
    x1 = gen1[0]
    x2 = gen2[0]
    y1 = gen1[1]
    return [x1, x2], y1

combo_gen = map(format_gen_outputs, gen1, gen2)
```

(In Python 3, `map` is lazy, so this works even with infinite generators.)
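To see the `map` trick in action without Keras, here is a small self-contained check with two dummy generators standing in for the `flow(...)` iterators; `dummy_flow` and its contents are made up purely for illustration:

```python
import numpy as np

def format_gen_outputs(gen1, gen2):
    x1 = gen1[0]
    x2 = gen2[0]
    y1 = gen1[1]
    return [x1, x2], y1

def dummy_flow(offset):
    """Stand-in for ImageDataGenerator.flow: yields (x_batch, y_batch) forever."""
    i = 0
    while True:
        yield np.full((2, 4), i + offset), np.array([i, i])
        i += 1

# map is lazy in Python 3, so combining two infinite generators is fine
combo_gen = map(format_gen_outputs, dummy_flow(0), dummy_flow(100))
(x1, x2), y = next(combo_gen)
```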
@jagiella I have a similar structure, but instead of one `datagen.flow` I have three, from three different sources. My problem is that I want to make sure the same set of augmentations is applied to arrays of the same index across all three batches. Any ideas? I think the `seed` argument in `datagen.flow` is for shuffling only.
I am using a slightly different variation:

```python
from keras.preprocessing.image import ImageDataGenerator

generator = ImageDataGenerator(rotation_range=90,
                               width_shift_range=0.05,
                               height_shift_range=0.05,
                               zoom_range=0.1)

def generate_data_generator_for_two_images(X1, X2, Y):
    # the shared seed keeps the two flows in sync
    genX1 = generator.flow(X1, Y, seed=7)
    genX2 = generator.flow(X2, seed=7)
    while True:
        X1i = next(genX1)
        X2i = next(genX2)
        yield [X1i[0], X2i], X1i[1]
```
I get the following error when using the function below:
UPDATE: Fixed with the following code:
I have a similar question: I want to use the triplet loss, so I need three images: two different ones from the same class and one from another class. Has anyone done similar work?
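For the triplet case, a minimal numpy-only sketch (not from this thread) of such a generator could look as follows; it assumes integer class labels with at least two samples per class, and `triplet_generator` is a hypothetical helper name:

```python
import numpy as np

def triplet_generator(X, y, batch_size=32, rng=None):
    """Yield (anchor, positive, negative) batches for triplet loss.

    X: array of images; y: array of integer class labels.
    Assumes every class has at least two samples.
    """
    rng = rng or np.random.default_rng(0)
    classes = np.unique(y)
    by_class = {c: np.flatnonzero(y == c) for c in classes}
    while True:
        a_idx, p_idx, n_idx = [], [], []
        for _ in range(batch_size):
            # pick two distinct classes: one for anchor/positive, one for negative
            c_pos, c_neg = rng.choice(classes, size=2, replace=False)
            a, p = rng.choice(by_class[c_pos], size=2, replace=False)
            n = rng.choice(by_class[c_neg])
            a_idx.append(a)
            p_idx.append(p)
            n_idx.append(n)
        yield X[a_idx], X[p_idx], X[n_idx]
```

With Keras, the three arrays of a batch would feed the three inputs of a shared-weight (Siamese-style) model.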
@jagiella I use this piece of code; however, it shows the error message: `ValueError: generator already executing`.
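`ValueError: generator already executing` typically means the same Python generator object is being consumed from several threads at once (for example, training with multiple workers). A common community workaround, not part of Keras itself, is to guard the generator with a lock; a minimal sketch:

```python
import threading

class ThreadSafeIterator:
    """Wrap an iterator so multiple threads can call next() on it safely."""

    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):
        # only one thread may advance the underlying generator at a time
        with self.lock:
            return next(self.it)

def threadsafe(gen_fn):
    """Decorator: make a generator function return a thread-safe iterator."""
    def wrapper(*args, **kwargs):
        return ThreadSafeIterator(gen_fn(*args, **kwargs))
    return wrapper
```

Decorating a generator function like `createGenerator` with `@threadsafe` would then serialize access across workers.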
@drorhilman I tried your code and got this error too. There are a lot of similar methods just like yours; I tried most of them and got similar error messages.
@jockes60
@fchollet
The following is my code:
@DNXie
My system is Windows 7.
@FrancisYizhang My code above has …
@DNXie
Could anyone kindly help me solve this problem (#10499)? I tried implementing the same generator as in this post, but I can't seem to figure out where my mistake is. Any help is very much appreciated.
@ahmedhosny did you ever find a solution for applying the same transform to the images at the same index in the two different arrays?
Hello guys, how are you? I have a question. I will train two input sets on the same network: for example, model1 receives input X1 (three folders containing classes, each class with training, validation, and test data) and model2 receives input X2 (likewise three classes, each with training, validation, and test data). Then I will concatenate a convolution of model X1 with one of model X2, so I have two training sets, two validation sets, and two test sets. My question is about the following command: `steps_per_epoch = nb_train_samples // batch_size`. I would like to know whether `nb_train_samples` is the sum of only training_class1_X1 + training_class2_X1 + training_class3_X1, or the sum of (training_class1_X1 + training_class2_X1 + training_class3_X1) + (training_class1_X2 + training_class2_X2 + training_class3_X2).
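Since the two branches are fed in parallel (each training step consumes one batch from each input, and the generators advance together), one epoch covers the paired samples once; so `nb_train_samples` should count the samples of one branch only, not both branches added together. A small sanity check of that arithmetic, with hypothetical class counts:

```python
# Hypothetical per-class training counts for branch X1 (branch X2 must have the
# same sizes, since the inputs are paired one-to-one).
training_class1_X1, training_class2_X1, training_class3_X1 = 300, 300, 400
batch_size = 32

# Count one branch only: the paired generators yield batches in lockstep.
nb_train_samples = training_class1_X1 + training_class2_X1 + training_class3_X1
steps_per_epoch = nb_train_samples // batch_size  # 1000 // 32 == 31
```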
Found it on the internet, don't remember where: `def two_image_generator(generator, …`
@TheStoneMX good idea. Thank you very much.
But what if your ImageDataGenerator adds augmented data and you need to match the right features for that image? How do I know which image belongs to which row of my dataframe?
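One approach, in the spirit of the index-tracking solution earlier in this thread: shuffle an index array yourself, feed the images through `flow(..., shuffle=False)`, and use the same indices to look up the matching dataframe rows. A minimal numpy-only sketch of the index bookkeeping (the augmentation step is left as a comment, and `paired_batches` is a hypothetical name):

```python
import numpy as np

def paired_batches(X, features, batch_size=4, rng=None):
    """Yield (image_batch, feature_batch) pairs whose rows stay aligned.

    X: images; features: one row per image (e.g. dataframe.values).
    """
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    while True:
        idx = rng.permutation(n)  # shuffle once per epoch
        for start in range(0, n, batch_size):
            batch_idx = idx[start:start + batch_size]
            # with Keras, X[batch_idx] would instead be the next batch of
            # ImageDataGenerator.flow(X[idx], shuffle=False), which preserves order
            images = X[batch_idx]
            rows = features[batch_idx]  # same indices -> same dataframe rows
            yield images, rows
```

Because `shuffle=False` makes `flow` preserve the order of the array it was given, `batch_idx` always identifies the dataframe rows belonging to the current image batch.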
I have built a model which consists of two branches that are then merged into a single one. For the training of the model I would like to use the ImageDataGenerator to augment the image data, but I don't know how to make it work for the mixed input type. Does anybody have an idea how to deal with this in Keras?
Any help would be highly appreciated!
Best,
Nick
MODEL
The first branch takes images as inputs:
The second branch takes auxiliary data as input:
Then those get merged into the final model:
TRAINING / PROBLEM:
I tried to do the following, which obviously failed:
This produces the following error message:
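For the mixed-input case described in this issue, the recurring answer in the thread is a wrapper generator that yields `([image_batch, aux_batch], label_batch)` for the two-branch model. A minimal numpy-only sketch of that output format (with Keras, the image batch would come from `ImageDataGenerator.flow` as in the solutions above; `mixed_input_generator` is a hypothetical name):

```python
import numpy as np

def mixed_input_generator(X_img, X_aux, Y, batch_size=4):
    """Yield ([image_batch, aux_batch], label_batch) for a two-branch model."""
    n = X_img.shape[0]
    while True:
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # with Keras, replace X_img[b] by an augmented batch from
            # ImageDataGenerator.flow(X_img[idx], Y[idx], shuffle=False)
            yield [X_img[b], X_aux[b]], Y[b]

# usage with a two-input Keras model (sketch):
# model.fit(mixed_input_generator(X_img, X_aux, Y),
#           steps_per_epoch=len(X_img) // batch_size)
```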