
Training error in multi-animal top-down-model #2004

Open
pinjuu opened this issue Oct 23, 2024 · 4 comments
Labels
bug Something isn't working

Comments


pinjuu commented Oct 23, 2024

Bug description

When I try to train the model, the following error occurs:

File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\tensorflow\python\framework\func_graph.py", line 1129, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1621, in predict_function  *
    return step_function(self, iterator)
File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1611, in step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1604, in run_step  **
    outputs = model.predict_step(data)
File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\engine\training.py", line 1572, in predict_step
    return self(x, training=False)
File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None

TypeError: Exception encountered when calling layer "top_down_multi_class_inference_model" (type TopDownMultiClassInferenceModel).

in user code:

    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 4102, in call  *
        crop_output = self.centroid_crop(example)
    File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler  **
        raise e.with_traceback(filtered_tb) from None

    TypeError: Exception encountered when calling layer "centroid_crop_ground_truth" (type CentroidCropGroundTruth).

    in user code:

        File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\sleap\nn\inference.py", line 772, in call  *
            crops = sleap.nn.peak_finding.crop_bboxes(full_imgs, bboxes, crop_sample_inds)
        File "C:\Users\spike\.conda\envs\sleap\lib\site-packages\sleap\nn\peak_finding.py", line 173, in crop_bboxes  *
            image_height = tf.shape(images)[1]

        TypeError: Failed to convert elements of tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64)) to Tensor. Consider casting elements to a supported type. See https://www.tensorflow.org/api_docs/python/tf/dtypes for supported TF dtypes.


    Call arguments received:
      • example_gt={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}


Call arguments received:
  • example={'image': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:2", shape=(None, 1), dtype=uint8), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_1/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'raw_image_size': 'tf.Tensor(shape=(4, 3), dtype=int32)', 'example_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'video_ind': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'frame_ind': 'tf.Tensor(shape=(4, 1), dtype=int64)', 'scale': 'tf.Tensor(shape=(4, 2), dtype=float32)', 'instances': 'tf.RaggedTensor(values=tf.RaggedTensor(values=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:2", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:1", shape=(None,), dtype=int64)), row_splits=Tensor("RaggedFromVariant_2/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'skeleton_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_3/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'track_inds': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:1", shape=(None,), dtype=int32), row_splits=Tensor("RaggedFromVariant_4/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))', 'n_tracks': 'tf.Tensor(shape=(4, 1), dtype=int32)', 'centroids': 'tf.RaggedTensor(values=Tensor("RaggedFromVariant/RaggedTensorFromVariant:1", shape=(None, 2), dtype=float32), row_splits=Tensor("RaggedFromVariant/RaggedTensorFromVariant:0", shape=(5,), dtype=int64))'}
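The bottom of the traceback is the key part: in `crop_bboxes`, `tf.shape(images)[1]` assumes `images` is a dense tensor with a single common shape, but it arrives as a `tf.RaggedTensor`, i.e. a batch whose frames do not all share one shape. A minimal pure-Python sketch of that constraint (the frame shapes and the `dense_batch_shape` helper below are hypothetical illustrations, not SLEAP code):

```python
# Illustration only: why a "ragged" batch cannot be treated as a dense
# tensor. The shapes below are hypothetical stand-ins for frames; SLEAP
# and TensorFlow internals are not reproduced here.

def dense_batch_shape(shapes):
    """Return the common (height, width, channels) if all frames agree,
    else None -- mirroring why tf.shape(images)[1] fails when `images`
    arrives as a tf.RaggedTensor instead of a dense tf.Tensor."""
    first = shapes[0]
    return first if all(s == first for s in shapes) else None

uniform = [(1024, 1280, 1)] * 4             # identical frames: dense batch is possible
mixed = [(1024, 1280, 1), (1080, 1080, 1)]  # differing frames: batch stays ragged

print(dense_batch_shape(uniform))  # -> (1024, 1280, 1)
print(dense_batch_shape(mixed))    # -> None
```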

Expected behaviour

Successfully train the model.

Actual behaviour

Training does not complete due to the error above.

Your personal set up

SLEAP v1.3.3

Environment packages
# paste output of `pip freeze` or `conda list` here
Logs
# paste relevant logs here, if any

Screenshots

How to reproduce

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error
pinjuu added the bug label on Oct 23, 2024
eberrigan (Contributor) commented:

Hi @pinjuu,

I will just need some more information from you.

How did you install SLEAP?

Please provide the command you are using to get this error.

Thanks!

Elizabeth


pinjuu commented Oct 24, 2024

I installed SLEAP as a conda package. The training configurations are below:

{
"_pipeline": "multi-animal top-down-id",
"_ensure_channels": "",
"outputs.run_name_prefix": "LBNcohort1_SIBody231024",
"outputs.runs_folder": "C:/Users/spike/Desktop/sleap/SLEAP Projects\models",
"outputs.tags": "",
"outputs.checkpointing.best_model": true,
"outputs.checkpointing.latest_model": false,
"outputs.checkpointing.final_model": false,
"outputs.tensorboard.write_logs": false,
"_save_viz": true,
"_predict_frames": "suggested frames (1539 total frames)",
"model.heads.centroid.sigma": 2.75,
"model.heads.multi_class_topdown.confmaps.anchor_part": null,
"model.heads.multi_class_topdown.confmaps.sigma": 5.0,
"model.heads.centroid.anchor_part": null,
"model.heads.centered_instance.anchor_part": null,
"data.instance_cropping.center_on_part": null
}
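One detail worth noting in the options above: the `outputs.runs_folder` value mixes forward and back slashes. As an aside (this is not a fix SLEAP requires), Python's `pathlib` treats both separators as equivalent on Windows, which is one way to check that such a path still points where intended:

```python
from pathlib import PureWindowsPath

# Path taken from the config above; PureWindowsPath accepts both "/"
# and "\" as separators, so the mixed path resolves consistently.
runs_folder = r"C:/Users/spike/Desktop/sleap/SLEAP Projects\models"
normalized = PureWindowsPath(runs_folder)

print(normalized.as_posix())  # -> C:/Users/spike/Desktop/sleap/SLEAP Projects/models
print(normalized.name)        # -> models
```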
{
"data": {
"labels": {
"training_labels": "C:/Users/spike/Desktop/sleap/SLEAP Projects/SI_Cohort1_body.slp",
"validation_labels": null,
"validation_fraction": 0.1,
"test_labels": null,
"split_by_inds": false,
"training_inds": [
613,
555,
277,
243,
476,
176,
386,
524,
506,
351,
553,
219,
462,
436,
70,
425,
547,
265,
504,
138,
264,
153,
597,
191,
438,
434,
416,
155,
24,
647,
37,
580,
530,
402,
193,
391,
593,
286,
376,
375,
497,
454,
563,
325,
141,
624,
632,
43,
47,
465,
261,
457,
560,
110,
441,
579,
214,
196,
312,
105,
229,
446,
385,
466,
189,
573,
633,
337,
308,
165,
182,
213,
612,
634,
269,
297,
89,
498,
73,
594,
107,
522,
493,
329,
100,
326,
185,
34,
589,
523,
420,
353,
111,
152,
513,
311,
417,
543,
114,
574,
617,
419,
267,
203,
564,
590,
568,
144,
382,
246,
290,
575,
600,
480,
35,
266,
461,
54,
208,
215,
147,
81,
183,
303,
448,
501,
640,
588,
406,
171,
562,
96,
10,
260,
108,
190,
328,
474,
603,
528,
399,
550,
137,
82,
366,
488,
160,
378,
32,
230,
510,
552,
120,
17,
322,
502,
161,
313,
398,
646,
63,
332,
595,
551,
320,
451,
278,
516,
75,
534,
44,
350,
41,
292,
607,
452,
86,
217,
405,
50,
103,
291,
489,
42,
336,
317,
340,
578,
80,
245,
442,
599,
372,
360,
540,
380,
115,
459,
126,
26,
358,
389,
252,
604,
381,
427,
301,
85,
495,
307,
636,
61,
247,
468,
0,
439,
512,
431,
587,
242,
486,
565,
5,
538,
169,
496,
157,
638,
293,
403,
135,
197,
251,
521,
377,
621,
228,
629,
148,
637,
94,
503,
455,
585,
542,
124,
117,
11,
428,
271,
287,
131,
156,
78,
544,
341,
45,
401,
72,
56,
482,
66,
370,
361,
300,
275,
440,
306,
248,
626,
392,
235,
469,
334,
608,
253,
475,
122,
525,
145,
30,
413,
234,
159,
545,
333,
59,
279,
412,
635,
280,
233,
184,
396,
374,
28,
324,
91,
487,
226,
150,
511,
58,
40,
255,
395,
345,
133,
109,
500,
338,
355,
281,
388,
354,
598,
136,
289,
616,
139,
357,
384,
299,
285,
426,
463,
433,
201,
223,
12,
532,
140,
514,
163,
102,
218,
211,
92,
620,
49,
499,
227,
195,
21,
481,
359,
539,
9,
186,
373,
128,
142,
3,
270,
421,
554,
52,
134,
435,
397,
576,
212,
164,
648,
273,
470,
304,
238,
173,
149,
494,
118,
364,
172,
288,
478,
394,
318,
437,
168,
549,
236,
33,
400,
210,
335,
611,
644,
60,
453,
445,
609,
529,
87,
298,
343,
422,
84,
483,
64,
414,
321,
69,
309,
315,
154,
200,
449,
348,
586,
123,
369,
561,
390,
2,
231,
256,
302,
491,
569,
1,
363,
257,
55,
249,
619,
254,
19,
232,
98,
410,
119,
127,
650,
519,
46,
533,
198,
23,
331,
258,
591,
566,
371,
216,
367,
125,
472,
379,
162,
222,
346,
53,
368,
505,
27,
464,
408,
113,
51,
4,
430,
627,
13,
387,
583,
22,
146,
596,
31,
316,
263,
404,
365,
18,
225,
537,
88,
179,
330,
7,
68,
170,
415,
546,
132,
268,
194,
79,
25,
456,
526,
129,
202,
175,
178,
106,
305,
205,
14,
548,
282,
557,
71,
606,
116,
577,
90,
29,
67,
536,
262,
344,
460,
424,
477,
610,
559,
166,
250,
167,
535,
485,
582,
187,
630,
641,
206,
584,
181,
121,
342,
104,
622,
484,
432,
48,
93,
8,
615,
339,
407,
444,
447,
272,
319,
347,
276,
507,
239,
174,
531,
520,
158,
349,
411,
643,
284,
74,
443,
418,
101,
221,
57,
259,
143,
450,
65,
623,
556,
509,
237,
207,
625,
605,
83,
572,
515,
130,
356,
490,
6,
645,
151,
492,
112
],
"validation_inds": [
296,
62,
558,
180,
508,
244,
649,
383,
628,
473,
479,
467,
15,
527,
631,
517,
95,
352,
97,
541,
220,
362,
16,
294,
274,
639,
458,
471,
518,
423,
177,
295,
567,
310,
76,
77,
283,
571,
592,
602,
614,
209,
20,
99,
323,
570,
199,
39,
240,
642,
327,
241,
314,
224,
204,
188,
192,
36,
393,
581,
38,
601,
429,
618,
409
],
"test_inds": null,
"search_path_hints": [
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
"",
""
],
"skeletons": []
},
"preprocessing": {
"ensure_rgb": false,
"ensure_grayscale": false,
"imagenet_mode": null,
"input_scaling": 0.5,
"pad_to_stride": 16,
"resize_and_pad_to_target": true,
"target_height": 1024,
"target_width": 1280
},
"instance_cropping": {
"center_on_part": null,
"crop_size": null,
"crop_size_detection_padding": 16
}
},
"model": {
"backbone": {
"leap": null,
"unet": {
"stem_stride": null,
"max_stride": 16,
"output_stride": 2,
"filters": 16,
"filters_rate": 2.0,
"middle_block": true,
"up_interpolate": true,
"stacks": 1
},
"hourglass": null,
"resnet": null,
"pretrained_encoder": null
},
"heads": {
"single_instance": null,
"centroid": {
"anchor_part": null,
"sigma": 2.75,
"output_stride": 2,
"loss_weight": 1.0,
"offset_refinement": false
},
"centered_instance": null,
"multi_instance": null,
"multi_class_bottomup": null,
"multi_class_topdown": null
},
"base_checkpoint": null
},
"optimization": {
"preload_data": true,
"augmentation_config": {
"rotate": true,
"rotation_min_angle": -15.0,
"rotation_max_angle": 15.0,
"translate": false,
"translate_min": -5,
"translate_max": 5,
"scale": false,
"scale_min": 0.9,
"scale_max": 1.1,
"uniform_noise": false,
"uniform_noise_min_val": 0.0,
"uniform_noise_max_val": 10.0,
"gaussian_noise": false,
"gaussian_noise_mean": 5.0,
"gaussian_noise_stddev": 1.0,
"contrast": false,
"contrast_min_gamma": 0.5,
"contrast_max_gamma": 2.0,
"brightness": false,
"brightness_min_val": 0.0,
"brightness_max_val": 10.0,
"random_crop": false,
"random_crop_height": 256,
"random_crop_width": 256,
"random_flip": false,
"flip_horizontal": false
},
"online_shuffling": true,
"shuffle_buffer_size": 128,
"prefetch": true,
"batch_size": 4,
"batches_per_epoch": 200,
"min_batches_per_epoch": 200,
"val_batches_per_epoch": 10,
"min_val_batches_per_epoch": 10,
"epochs": 200,
"optimizer": "adam",
"initial_learning_rate": 0.0001,
"learning_rate_schedule": {
"reduce_on_plateau": true,
"reduction_factor": 0.5,
"plateau_min_delta": 1e-06,
"plateau_patience": 5,
"plateau_cooldown": 3,
"min_learning_rate": 1e-08
},
"hard_keypoint_mining": {
"online_mining": false,
"hard_to_easy_ratio": 2.0,
"min_hard_keypoints": 2,
"max_hard_keypoints": null,
"loss_scale": 5.0
},
"early_stopping": {
"stop_training_on_plateau": true,
"plateau_min_delta": 1e-08,
"plateau_patience": 20
}
},
"outputs": {
"save_outputs": true,
"run_name": null,
"run_name_prefix": "LBNcohort1_SIBody231024",
"run_name_suffix": null,
"runs_folder": "C:/Users/spike/Desktop/sleap/SLEAP Projects\models",
"tags": [
""
],
"save_visualizations": true,
"delete_viz_images": true,
"zip_outputs": false,
"log_to_csv": true,
"checkpointing": {
"initial_model": false,
"best_model": true,
"every_epoch": false,
"latest_model": false,
"final_model": false
},
"tensorboard": {
"write_logs": false,
"loss_frequency": "epoch",
"architecture_graph": false,
"profile_graph": false,
"visualizations": true
},
"zmq": {
"subscribe_to_controller": true,
"controller_address": "tcp://127.0.0.1:9000",
"controller_polling_timeout": 10,
"publish_updates": true,
"publish_address": "tcp://127.0.0.1:9001"
}
},
"name": "",
"description": "",
"sleap_version": "1.3.3",
"filename": "C:/Users/spike/Desktop/sleap/SLEAP Projects\models\LBNcohort1_SIBody231024241023_102341.centroid.n=651\training_config.json"
}
{
"data": {
"labels": {
"training_labels": "C:/Users/spike/Desktop/sleap/SLEAP Projects/SI_Cohort1_body.slp",
"validation_labels": null,
"validation_fraction": 0.1,
"test_labels": null,
"split_by_inds": false,
"training_inds": [
277,
530,
143,
611,
40,
22,
402,
616,
174,
561,
448,
610,
154,
475,
558,
123,
191,
41,
384,
458,
483,
605,
603,
241,
393,
120,
540,
127,
15,
200,
296,
107,
7,
460,
579,
318,
299,
620,
434,
205,
85,
553,
437,
479,
17,
308,
578,
351,
0,
383,
614,
359,
365,
298,
188,
26,
79,
340,
428,
638,
629,
566,
259,
271,
484,
101,
604,
342,
99,
348,
454,
76,
494,
622,
240,
124,
266,
375,
122,
426,
417,
237,
503,
396,
404,
90,
456,
278,
592,
68,
439,
111,
606,
432,
341,
443,
25,
78,
134,
353,
369,
269,
19,
335,
261,
419,
198,
50,
210,
31,
500,
66,
495,
641,
546,
95,
158,
190,
398,
575,
110,
183,
464,
131,
491,
164,
465,
3,
223,
118,
229,
326,
583,
80,
630,
368,
70,
468,
534,
355,
486,
619,
82,
45,
455,
425,
272,
221,
297,
273,
378,
562,
317,
168,
30,
146,
481,
42,
502,
305,
309,
270,
279,
559,
627,
421,
142,
574,
24,
5,
598,
422,
560,
399,
441,
292,
488,
524,
51,
33,
108,
388,
331,
112,
322,
81,
387,
236,
544,
337,
370,
635,
408,
93,
516,
643,
265,
162,
527,
452,
374,
49,
557,
71,
300,
173,
333,
515,
104,
376,
531,
642,
354,
328,
125,
29,
117,
185,
98,
438,
382,
323,
344,
430,
521,
231,
60,
645,
20,
412,
590,
601,
16,
433,
492,
295,
47,
514,
523,
389,
394,
114,
522,
607,
310,
46,
361,
232,
38,
596,
207,
571,
325,
429,
136,
91,
130,
222,
147,
570,
256,
589,
406,
424,
528,
116,
519,
386,
233,
94,
303,
330,
445,
304,
56,
197,
257,
588,
226,
497,
217,
477,
166,
377,
364,
52,
247,
61,
413,
149,
213,
637,
409,
595,
246,
526,
459,
280,
631,
11,
227,
268,
252,
58,
547,
293,
283,
238,
473,
103,
532,
255,
62,
446,
249,
613,
284,
501,
23,
102,
92,
542,
13,
264,
201,
332,
225,
487,
379,
397,
513,
319,
196,
182,
506,
9,
517,
324,
628,
362,
618,
74,
195,
115,
1,
181,
416,
372,
573,
245,
427,
577,
212,
161,
133,
97,
474,
113,
235,
54,
469,
75,
401,
194,
48,
202,
59,
466,
151,
155,
77,
106,
624,
567,
137,
211,
489,
504,
621,
639,
53,
518,
320,
435,
286,
444,
86,
644,
580,
507,
14,
485,
418,
156,
529,
634,
420,
391,
461,
623,
548,
291,
204,
496,
132,
334,
586,
67,
597,
253,
536,
537,
228,
626,
552,
554,
403,
138,
414,
447,
538,
367,
572,
541,
43,
357,
153,
288,
533,
636,
525,
214,
34,
224,
327,
172,
215,
239,
129,
163,
216,
440,
289,
505,
199,
462,
21,
345,
258,
177,
87,
581,
490,
478,
593,
600,
275,
187,
178,
511,
4,
350,
139,
363,
148,
184,
356,
358,
165,
339,
220,
608,
244,
290,
285,
192,
450,
555,
294,
380,
539,
463,
311,
72,
564,
169,
591,
405,
12,
203,
6,
248,
321,
615,
498,
556,
100,
39,
234,
315,
313,
69,
576,
316,
119,
267,
159,
302,
410,
65,
274,
44,
457,
36,
371,
171,
551,
28,
276,
175,
392,
451,
63,
27,
336,
263,
219,
602,
105,
145,
480,
453,
352,
150,
640,
520,
329,
535,
390,
415,
360,
8,
633,
170,
301,
73,
314,
423,
160,
32,
543,
312,
64,
400,
509,
167,
550,
55,
57,
470,
609,
625,
347,
84,
582,
189,
508,
569,
135,
385,
287,
126,
37,
510,
208,
476,
281,
96,
218,
617,
411
],
"validation_inds": [
493,
243,
338,
632,
482,
262,
89,
141,
346,
193,
83,
584,
128,
140,
349,
250,
18,
467,
35,
585,
563,
2,
449,
565,
251,
612,
179,
254,
366,
260,
176,
282,
144,
186,
499,
568,
594,
157,
599,
471,
472,
436,
242,
587,
306,
549,
431,
121,
545,
373,
209,
230,
442,
10,
395,
180,
206,
381,
152,
512,
343,
109,
407,
307,
88
],
"test_inds": null,
"search_path_hints": [
"",
"",
"",
"",
"",
"",
""
],
"skeletons": []
},
"preprocessing": {
"ensure_rgb": false,
"ensure_grayscale": false,
"imagenet_mode": null,
"input_scaling": 1.0,
"pad_to_stride": 16,
"resize_and_pad_to_target": true,
"target_height": 1080,
"target_width": 1080
},
"instance_cropping": {
"center_on_part": null,
"crop_size": 272,
"crop_size_detection_padding": 16
}
},
"model": {
"backbone": {
"leap": null,
"unet": {
"stem_stride": null,
"max_stride": 16,
"output_stride": 2,
"filters": 64,
"filters_rate": 2.0,
"middle_block": true,
"up_interpolate": false,
"stacks": 1
},
"hourglass": null,
"resnet": null,
"pretrained_encoder": null
},
"heads": {
"single_instance": null,
"centroid": null,
"centered_instance": null,
"multi_instance": null,
"multi_class_bottomup": null,
"multi_class_topdown": {
"confmaps": {
"anchor_part": null,
"part_names": [
"nose1",
"neck1",
"earL1",
"earR1",
"forelegL1",
"forelegR1",
"tailstart1",
"hindlegL1",
"hindlegR1",
"tail1",
"tailend1"
],
"sigma": 5.0,
"output_stride": 2,
"loss_weight": 1.0,
"offset_refinement": false
},
"class_vectors": {
"classes": [
"1",
"2"
],
"num_fc_layers": 3,
"num_fc_units": 64,
"global_pool": true,
"output_stride": 16,
"loss_weight": 1.0
}
}
},
"base_checkpoint": null
},
"optimization": {
"preload_data": true,
"augmentation_config": {
"rotate": false,
"rotation_min_angle": -180.0,
"rotation_max_angle": 180.0,
"translate": false,
"translate_min": -5,
"translate_max": 5,
"scale": false,
"scale_min": 0.9,
"scale_max": 1.1,
"uniform_noise": false,
"uniform_noise_min_val": 0.0,
"uniform_noise_max_val": 10.0,
"gaussian_noise": false,
"gaussian_noise_mean": 5.0,
"gaussian_noise_stddev": 1.0,
"contrast": false,
"contrast_min_gamma": 0.5,
"contrast_max_gamma": 2.0,
"brightness": false,
"brightness_min_val": 0.0,
"brightness_max_val": 10.0,
"random_crop": false,
"random_crop_height": 256,
"random_crop_width": 256,
"random_flip": false,
"flip_horizontal": false
},
"online_shuffling": true,
"shuffle_buffer_size": 128,
"prefetch": true,
"batch_size": 8,
"batches_per_epoch": 200,
"min_batches_per_epoch": 200,
"val_batches_per_epoch": 10,
"min_val_batches_per_epoch": 10,
"epochs": 100,
"optimizer": "adam",
"initial_learning_rate": 0.0001,
"learning_rate_schedule": {
"reduce_on_plateau": true,
"reduction_factor": 0.5,
"plateau_min_delta": 1e-06,
"plateau_patience": 5,
"plateau_cooldown": 3,
"min_learning_rate": 1e-08
},
"hard_keypoint_mining": {
"online_mining": false,
"hard_to_easy_ratio": 2.0,
"min_hard_keypoints": 2,
"max_hard_keypoints": null,
"loss_scale": 5.0
},
"early_stopping": {
"stop_training_on_plateau": true,
"plateau_min_delta": 1e-06,
"plateau_patience": 10
}
},
"outputs": {
"save_outputs": true,
"run_name": null,
"run_name_prefix": "LBNcohort1_SIBody231024",
"run_name_suffix": null,
"runs_folder": "C:/Users/spike/Desktop/sleap/SLEAP Projects\models",
"tags": [
""
],
"save_visualizations": true,
"delete_viz_images": true,
"zip_outputs": false,
"log_to_csv": true,
"checkpointing": {
"initial_model": false,
"best_model": true,
"every_epoch": false,
"latest_model": false,
"final_model": false
},
"tensorboard": {
"write_logs": false,
"loss_frequency": "epoch",
"architecture_graph": false,
"profile_graph": false,
"visualizations": true
},
"zmq": {
"subscribe_to_controller": true,
"controller_address": "tcp://127.0.0.1:9000",
"controller_polling_timeout": 10,
"publish_updates": true,
"publish_address": "tcp://127.0.0.1:9001"
}
},
"name": "",
"description": "",
"sleap_version": "1.3.3",
"filename": "C:/Users/spike/Desktop/sleap/SLEAP Projects\models\LBNcohort1_SIBody231024241023_111429.multi_class_topdown.n=651\training_config.json"
}

eberrigan (Contributor) commented:

It looks like your `skeletons` field is an empty list. Are you able to open this project in the GUI and take a peek at the skeleton and labels?


pinjuu commented Oct 25, 2024

[screenshot attached]

I have labeled 651 frames in the project. I have also trained this model before, and it worked.
