```python
# select the percentage of layers to be trained while using the transfer learning
# technique. The selected layers will be close to the output/final layers.
unfreeze_percentage = 0
learning_rate = 0.001

if training_format == "scratch":
    print("Training a model from scratch")
    model = scratch(train, val, learning_rate)
elif training_format == "transfer_learning":
    print("Fine Tuning the MobileNet model")
    model = transfer_learn(train, val, unfreeze_percentage, learning_rate)
```
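The `transfer_learn` call above takes an `unfreeze_percentage` that selects how many of the layers closest to the output get trained. As a hedged sketch of what that unfreezing step might look like (the function name `unfreeze_top_layers` is mine, not the repo's; it is duck-typed so it works with any Keras-style model exposing `.layers` and per-layer `.trainable`):

```python
def unfreeze_top_layers(model, unfreeze_percentage):
    """Freeze every layer, then unfreeze the last `unfreeze_percentage`
    percent of layers (those closest to the output/final layers).

    Hypothetical helper: sketches only the unfreezing step of a
    transfer-learning setup, not the book's actual implementation.
    """
    n_layers = len(model.layers)
    n_unfreeze = int(n_layers * unfreeze_percentage / 100)

    # Freeze the whole base model first.
    for layer in model.layers:
        layer.trainable = False

    # Then unfreeze only the top slice, if any.
    if n_unfreeze > 0:
        for layer in model.layers[-n_unfreeze:]:
            layer.trainable = True
    return model
```

With `unfreeze_percentage = 0` (as in the snippet above) every layer stays frozen and only any newly added classification head would train.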
When training the model I see messages like:

`Corrupt JPEG data: 65 extraneous bytes before marker 0xd9`

I googled it, and the problem is corrupted images in the cats-and-dogs dataset. To fix it, the dataset needs to be cleaned with code like the following:
```python
import os
import tensorflow as tf

num_skipped = 0
for folder_name in ("Cat", "Dog"):
    folder_path = os.path.join("PetImages", folder_name)
    for fname in os.listdir(folder_path):
        fpath = os.path.join(folder_path, fname)
        try:
            fobj = open(fpath, "rb")
            is_jfif = tf.compat.as_bytes("JFIF") in fobj.peek(10)
        finally:
            fobj.close()
        if not is_jfif:
            num_skipped += 1
            # Delete corrupted image
            os.remove(fpath)

print("Deleted %d images" % num_skipped)
```
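The `0xd9` in the warning is the JPEG end-of-image (EOI) marker, so the message means the decoder found stray bytes before the file's end marker. As a complement to the JFIF-header check above, here is a hedged, TensorFlow-free sketch that checks the structural markers directly (the function name and the trailing-null tolerance are my assumptions, not code from the repo):

```python
def looks_like_complete_jpeg(data: bytes) -> bool:
    """Heuristic check: a well-formed JPEG starts with the SOI marker
    (FF D8) and ends with the EOI marker (FF D9). Trailing null padding,
    which some tools append, is tolerated before the check."""
    return data[:2] == b"\xff\xd8" and data.rstrip(b"\0").endswith(b"\xff\xd9")
```

A file that fails this check is truncated or has junk after the image stream; it will still trigger decoder warnings even if the JFIF header is present.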
- removing deprecated files
- the gcloud sessions do not share the same runtime (so data has to be re-downloaded each time), so outputs are now written to gdrive to make sure runs persist and outputs are accessible to subsequent notebooks in colab
- renaming 4/1 notebook - since it had underscores instead of hyphens, the colab link in the notebook was not working
- xception added to imports for notebook 1, since it is part of the model_maker
- PracticalDL#163 - fixed
- PracticalDL#164 - fixed
- PracticalDL#169 - fixed
- metric='angular' added for annoy as default arg will be removed in subsequent releases
- removing a duplicate PCA + Annoy section
- PracticalDL#170 - fixed
- time is a negligible factor here and is not needed in the plots (since we are using the optimised numpy accuracy calculation from issue 170) - hence, modifying the plots
- removing matplotlib.style.use('seaborn') since it is deprecated
- the final fine-tuning notebook uses Caltech256 features (as per the book), which do not exist, since fine-tuning was done on Caltech101 - hence, renaming those files to caltech101. Can we retain caltech101 to test?
- PracticalDL#167 - fixed, if the above is okay
- formatted the code
chapter 5:
- the write_grads and batch_size params have been removed from the callback (they are deprecated and will be removed in subsequent releases)
- PracticalDL#174 - not able to replicate this issue
- added a pointer to the notebook that suggests that for tensorboard to work without a 403 Forbidden error on Colab, cookies need to be allowed (I faced this issue)
- notebook 3 in chapter 5 is identical to notebook 2 in chapter 2 - replaced the file directly
- the autokeras notebook in Colab is named autokeras-error.ipynb - where can we change this to autokeras.ipynb?
- fixing accuracy score calculation in the autokeras notebook
- formatted the code
chapter 6:
- including the download_sample_image function
- formatted the code
These warnings appear when training the model with transfer learning, so the dataset needs to be cleaned before training.

see also - https://discuss.tensorflow.org/t/first-steps-in-keras-error/8049/11

But I have no idea how to clean the data when the dataset is stored as TFRecords. How do you fix that? And if it is not fixed, will the trained model still be correct?
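I don't know the book's intended answer, but since an uncompressed TFRecord file is just a sequence of length-prefixed records on disk, one option is to filter it without TensorFlow at all. This is a minimal sketch under stated assumptions: the TFRecords are uncompressed, the serialized Examples embed the JPEG bytes verbatim (so the EOI marker `FF D9` appears somewhere in a healthy record's payload), and both function names are mine. A more robust check would parse each Example and run `tf.io.decode_jpeg` on the image bytes inside a try/except.

```python
import struct

def filter_tfrecord(in_path, out_path, keep_fn):
    """Copy an (uncompressed) TFRecord file, keeping only records whose
    raw payload passes keep_fn.  Each record on disk is laid out as:
      uint64 length | uint32 masked CRC of length | payload | uint32 masked CRC of payload
    The CRC fields are copied verbatim, so kept records remain valid."""
    kept = dropped = 0
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        while True:
            header = src.read(12)      # length (8 bytes) + length CRC (4 bytes)
            if len(header) < 12:
                break                  # end of file
            (length,) = struct.unpack("<Q", header[:8])
            payload = src.read(length)
            footer = src.read(4)       # payload CRC
            if keep_fn(payload):
                dst.write(header + payload + footer)
                kept += 1
            else:
                dropped += 1
    return kept, dropped

def payload_looks_ok(payload):
    """Rough heuristic (my assumption): protobuf embeds bytes fields
    verbatim, so a payload that never contains the JPEG EOI marker
    probably holds a truncated image."""
    return b"\xff\xd9" in payload
```

Usage would be something like `kept, dropped = filter_tfrecord("train.tfrecord", "train_clean.tfrecord", payload_looks_ok)`. As for training on uncleaned data: the warning is non-fatal (the decoder discards the extraneous bytes), so the model still trains, but truly truncated images add label noise, which is why cleaning first is the safer route.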