There are several possible reasons why the loss increases with each epoch during simple linear regression in PyTorch. Some of them are:
Learning rate: If the learning rate is too high, each gradient step overshoots the minimum and the loss diverges instead of decreasing. Lowering the learning rate is usually the first thing to try.
Poor initialization of weights: Badly scaled initial weights can produce very large gradients in the first epochs, leading to unstable updates.
Data normalization: If the input data is not normalized before training, features with large magnitudes dominate the gradients and the model may have difficulty converging.
Model complexity: A model that is too complex may overfit the training data; this typically shows up as a rising validation loss even while the training loss keeps falling.
Insufficient training: The model might not have been trained for enough epochs to reach the minimum loss, though this explains a high loss rather than an increasing one.
Outliers: With a squared-error loss, outliers contribute disproportionately large gradients, and the resulting large updates can make the loss increase as the model tries to fit them.
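The first three points can be illustrated with a minimal sketch. The data, learning rate, and variable names below are illustrative assumptions, not taken from the question; the key ideas are normalizing the inputs and using a modest learning rate so the loss shrinks rather than grows:

```python
import torch

# Synthetic data: y = 3x + 2 with a little noise (illustrative values)
torch.manual_seed(0)
x = torch.rand(100, 1) * 10
y = 3 * x + 2 + torch.randn(100, 1) * 0.5

# Normalize the inputs so the gradient steps are well scaled
x_norm = (x - x.mean()) / x.std()

model = torch.nn.Linear(1, 1)
loss_fn = torch.nn.MSELoss()
# A small learning rate keeps plain SGD stable; a much larger one
# (e.g. lr=1.5 here) can make the loss grow from epoch to epoch
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

losses = []
for epoch in range(200):
    optimizer.zero_grad()
    pred = model(x_norm)
    loss = loss_fn(pred, y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

# With normalized inputs and a modest learning rate, the final loss
# should be far below the initial loss
print(losses[0], losses[-1])
```

If the loss still increases with a setup like this, re-running with an even smaller learning rate, or plotting `losses`, quickly shows whether divergence is the culprit.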
Asked: 2021-04-24 11:00:00 +0000
Seen: 7 times
Last updated: Oct 20 '21