Loss is used to calculate the gradient in back-propagation, so we should keep the raw value produced by the model and use it to calculate the loss.
Accuracy, on the other hand, is used by humans to judge how well the model performs on a given dataset. In this case we need to round the raw output to the expected range, e.g. to either 0 or 1 for binary classification.
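A minimal sketch of the distinction, assuming a binary classifier whose raw output has already been passed through a sigmoid (the values here are made up):

import torch
import torch.nn.functional as F

# raw sigmoid outputs from the model, and the ground-truth labels
probs = torch.tensor([0.9, 0.4, 0.7])
labels = torch.tensor([1.0, 0.0, 0.0])

# loss: computed on the raw outputs, not the rounded predictions
loss = F.binary_cross_entropy(probs, labels)

# accuracy: round the raw outputs to {0, 1} first, then compare
preds = probs.round()
accuracy = (preds == labels).float().mean()

print(loss.item(), accuracy.item())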
Tuesday, 19 November 2019
Wednesday, 13 November 2019
how to run tensorboard?
Logged scalars can then be visualized with TensorBoard, which should be installable
and runnable with:
pip install tensorboard
tensorboard --logdir=runs
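Note that SummaryWriter writes its event files under ./runs/ by default, which is why --logdir=runs works without further configuration.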
To log training and testing loss (and accuracy), create a SummaryWriter and call
add_scalar once per iteration:
from torch.utils.tensorboard import SummaryWriter
import numpy as np

writer = SummaryWriter()
for n_iter in range(100):
    # random values stand in for real loss/accuracy numbers
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
writer.close()
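Because the tags share the 'Loss' and 'Accuracy' prefixes, TensorBoard groups the four charts under those two headings in the Scalars tab.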
In order to merge multiple curves into one graph, use add_scalars with a shared
main tag:
from torch.utils.tensorboard import SummaryWriter
import numpy as np

writer = SummaryWriter()
for n_iter in range(10000):
    # every series written under the same main tag is drawn on one chart
    writer.add_scalars('data/scalar_group', {'loss': n_iter * np.arctan(n_iter)}, n_iter)
    if n_iter % 1000 == 0:
        writer.add_scalars('data/scalar_group', {'top1': n_iter * np.sin(n_iter)}, n_iter)
        writer.add_scalars('data/scalar_group', {'top5': n_iter * np.cos(n_iter)}, n_iter)
writer.close()
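Note that add_scalars internally creates a separate writer (and run directory) per key, so 'loss', 'top1' and 'top5' show up as separate runs in TensorBoard while being drawn on the same chart.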
python: print actual numbers instead of scientific notation (e)
import numpy as np

# suppress=True makes NumPy print small floats in fixed-point
# notation instead of scientific (e) notation
np.set_printoptions(suppress=True)
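A quick sketch of the effect (the array values are arbitrary):

import numpy as np

np.set_printoptions(suppress=True)
# without suppress=True this prints something like [1.e-05 1.5e+00]
print(np.array([0.00001, 1.5]))  # now printed in fixed-point form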