not-edge loss value is too small, edge loss is nan/inf #14
Okay, an update on the NaN/Inf problem: in the losses.affinity_loss function,
the output 'edge' can come out as a completely zero matrix, meaning no effective values are left, so computing the final edge_loss runs into trouble. I will continue debugging; I hope this helps someone.
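A minimal sketch of the failure mode described above and one way to guard against it. This is not the repository's actual code: the function name `masked_mean_loss` and the guard are hypothetical, written in plain NumPy under the assumption that edge_loss is a mean of per-pixel losses over the binary 'edge' mask.

```python
import numpy as np

def masked_mean_loss(per_pixel_loss, mask, eps=1e-12):
    """Mean of per-pixel losses over a binary mask.

    If the mask is all zeros (no edge pixels in the batch), a naive
    sum(loss * mask) / sum(mask) divides by zero and yields NaN/Inf.
    Guarding the denominator returns 0.0 instead, so the edge term
    simply drops out of the total loss for that batch.
    """
    mask = mask.astype(per_pixel_loss.dtype)
    total = np.sum(per_pixel_loss * mask)
    count = np.sum(mask)
    # Empty mask: return 0.0 instead of 0/0 = NaN.
    return total / np.maximum(count, eps) if count > 0 else 0.0

# All-zero 'edge' mask, as described in the comment above:
loss = np.random.rand(4, 4)
empty_mask = np.zeros((4, 4))
print(masked_mean_loss(loss, empty_mask))  # 0.0, not NaN
```

A TensorFlow version of the same guard could use tf.math.divide_no_nan for the division.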
I think the problem is just caused by this, so 'edge' and 'not_ignore' should be checked carefully. However, I still don't know whether it's a common problem; maybe it's related to the dataset itself. What do you think, @twke18?
I was looking at how … Is it intentional? Could this be the source of the zero-matrix issue when …?
Have you guys obtained improved results using the affinity field loss? I have tried many times, but I can hardly improve over my baseline. I also have the same issue as yours; I simply exclude the term that is NaN when computing the total loss.
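The workaround mentioned above can be sketched as follows. This is a hypothetical helper (the name `safe_total` is not from the repository), shown with Python's stdlib only:

```python
import math

def safe_total(loss_terms):
    """Sum scalar loss terms, skipping any that came out NaN or Inf
    (e.g. an edge loss computed over an empty edge mask)."""
    return sum(t for t in loss_terms if math.isfinite(t))

# The NaN edge-loss term is dropped; the remaining terms are summed.
print(safe_total([0.5, float('nan'), 0.25]))  # 0.75
```

Note that silently dropping a term changes the effective loss weighting on those batches, which is one reason this workaround may not recover the improvement the affinity loss is supposed to give.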
Hi, I tried your affinity loss (not adaptive) as my loss function. My network is DeepLabV3+ with a MobileNet backbone, on my own dataset, with margin=3.0, lambda1=1.0, lambda2=1.0.
But something is wrong with the loss: the not-edge loss is really small and does not converge.
Here is a sample of the not-edge loss values during training:
Mean Aff Loss is:[6.15826357e-05]
Mean Aff Loss is:[7.15486458e-05]
Mean Aff Loss is:[4.56848611e-05]
Mean Aff Loss is:[5.51421945e-05]
Mean Aff Loss is:[7.94407606e-05]
Mean Aff Loss is:[0.000143873782]
Mean Aff Loss is:[6.04316447e-05]
Mean Aff Loss is:[9.94381699e-05]
Mean Aff Loss is:[0.000107184518]
Mean Aff Loss is:[6.87552383e-05]
Mean Aff Loss is:[7.98113e-05]
Mean Aff Loss is:[0.000122067388]
Mean Aff Loss is:[5.42108719e-05]
As for the edge loss, it reports NaN or Inf right from the beginning of training. This has been troubling me a lot :(
Could anyone give some advice?