
Why is evaluation done on the training set #31

Open
fulcus opened this issue May 11, 2022 · 0 comments
fulcus commented May 11, 2022

If I understand correctly, the model is evaluated on the same data it was trained on. Doesn't this lead to a misleading evaluation?

Load data

x, y = load_data(args.dataset)

DEC-keras/datasets.py

Lines 94 to 103 in 2438070

def load_mnist():
    # the data, shuffled and split between train and test sets
    import numpy as np  # imported at module level in datasets.py
    from keras.datasets import mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x = np.concatenate((x_train, x_test))
    y = np.concatenate((y_train, y_test))
    x = x.reshape((x.shape[0], -1))
    x = np.divide(x, 255.)
    print('MNIST samples', x.shape)
    return x, y

Evaluate

DEC-keras/DEC.py

Lines 333 to 335 in 2438070

y_pred = dec.fit(x, y=y, tol=args.tol, maxiter=args.maxiter, batch_size=args.batch_size,
                 update_interval=update_interval, save_dir=args.save_dir)
print('acc:', metrics.acc(y, y_pred))

Shouldn't x_train and y_train be used to pretrain and fit the model, and then x_test and y_test be used to evaluate it?
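For concreteness, here is a minimal sketch of the split being proposed, using random stand-in arrays instead of the real MNIST download; the `dec.fit` / `dec.predict` / `metrics.acc` calls in the comments mirror the DEC-keras interface shown above but are left as comments since they depend on a trained model:

```python
import numpy as np

# Stand-ins for the two splits returned by mnist.load_data(),
# already flattened to 784 features as in load_mnist().
rng = np.random.default_rng(0)
x_train = rng.random((60, 784))
y_train = rng.integers(0, 10, 60)
x_test = rng.random((10, 784))
y_test = rng.integers(0, 10, 10)

# Current behaviour in datasets.load_mnist(): train and test are merged,
# so the same x is later used both to fit and to print the accuracy.
x = np.concatenate((x_train, x_test))
y = np.concatenate((y_train, y_test))

# The alternative the question proposes: fit on the training portion only,
# then score on the held-out portion (hypothetical predict step):
#   dec.fit(x_train, y=y_train, ...)
#   y_pred_test = dec.predict(x_test)
#   print('test acc:', metrics.acc(y_test, y_pred_test))
print('merged:', x.shape, 'train:', x_train.shape, 'test:', x_test.shape)
```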
