NaN on experiment for MNIST data. #3

Open
rica01 opened this issue Dec 30, 2021 · 0 comments
Comments


rica01 commented Dec 30, 2021

Hello.
I am trying to use your SET-MLP implementation on a subset of the MNIST data. I load the data with the following function I wrote:

import numpy as np

def load_fashion_mnist_data(no_training_samples, no_testing_samples, filepath):

    data = np.load(filepath)

    # draw a random subset of the training and testing samples
    index_train = np.arange(data["X_train"].shape[0])
    np.random.shuffle(index_train)

    index_test = np.arange(data["X_test"].shape[0])
    np.random.shuffle(index_test)

    x_train = data["X_train"][index_train[0:no_training_samples], :]
    y_train = data["Y_train"][index_train[0:no_training_samples], :]
    x_test = data["X_test"][index_test[0:no_testing_samples], :]
    y_test = data["Y_test"][index_test[0:no_testing_samples], :]

    # normalize pixel values to [0, 1]
    x_train = x_train / 255.
    x_test = x_test / 255.

    return (x_train.astype('float64'), y_train.astype('float64'),
            x_test.astype('float64'), y_test.astype('float64'))
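For what it's worth, the indexing pattern in the loader is sound: because a single shuffled index array is applied to both `X_train` and `Y_train`, the sample/label pairing survives the subsampling. A toy sketch with made-up arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.arange(20).reshape(10, 2)   # 10 samples, 2 features; row i = [2i, 2i+1]
Y = np.arange(10).reshape(10, 1)   # matching labels; row i = [i]

idx = rng.permutation(X.shape[0])  # one permutation reused for both arrays
x_sub, y_sub = X[idx[:5]], Y[idx[:5]]

# rows stay paired because the same index array selects both subsets
assert all(x_sub[i, 0] // 2 == y_sub[i, 0] for i in range(5))
```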

After a couple of epochs I get this error in the operation:
self.pdw[index] = self.momentum * self.pdw[index] - self.learning_rate * dw

Full traceback:

Traceback (most recent call last):
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 585, in <module>
    metrics = set_mlp.fit(x_train, y_train, x_test, y_test, loss=CrossEntropy, epochs=no_training_epochs, batch_size=batch_size, learning_rate=learning_rate,
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 309, in fit
    self._back_prop(z, a, masks,  y_[k:l])
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 242, in _back_prop
    self._update_w_b(k, v[0], v[1])
  File "C:\Users\rroman\projects\Py\old\set_mlp (2).py", line 258, in _update_w_b
    self.pdw[index] = self.momentum * self.pdw[index] - self.learning_rate * dw
  File "C:\Users\rroman\AppData\Roaming\Python\Python39\site-packages\scipy\sparse\base.py", line 543, in __rmul__   
    return self.__mul__(other)
  File "C:\Users\rroman\AppData\Roaming\Python\Python39\site-packages\scipy\sparse\base.py", line 475, in __mul__    
    return self._mul_scalar(other)
  File "C:\Users\rroman\AppData\Roaming\Python\Python39\site-packages\scipy\sparse\data.py", line 124, in _mul_scalar
    return self._with_data(self.data * other)
FloatingPointError: underflow encountered in multiply

I am guessing that a value in one of the arrays keeps shrinking until it falls below the smallest representable float64, which triggers the underflow. Could somebody give me a hand preventing this from happening?
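That guess is consistent with the traceback: NumPy only raises `FloatingPointError` on underflow when strict error handling is enabled via `np.seterr`, so the script must be running with it. Below is a minimal reproduction of the failure mode, plus one possible mitigation (flushing tiny momentum entries to zero with a hypothetical cutoff of `1e-300`, which sits safely above the subnormal range):

```python
import numpy as np

np.seterr(under='raise')  # the SET-MLP script evidently enables strict FP errors

pdw = np.array([1e-310])  # a subnormal momentum entry (below ~2.2e-308)
try:
    pdw * 0.9             # result is still subnormal -> underflow is raised
except FloatingPointError as exc:
    print("caught:", exc)

# one possible mitigation: flush tiny entries to zero before the update
np.seterr(under='ignore')
pdw = np.array([1e-310, 0.5, -3e-320])
pdw[np.abs(pdw) < 1e-300] = 0.0  # hypothetical cutoff, well above subnormals
```

In the SET-MLP code this would mean clipping `self.pdw[index].data` (the sparse matrix's value array) before the momentum multiply, or simply relaxing the error mode with `np.seterr(under='warn')`.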
