Why regularisation is used in regression models
Measuring a model's performance on a cross-validation set (Jcv) and a test set (Jtest) is a useful way to diagnose overfitting and evaluate the generalisation capability of a machine learning model. However, these metrics only reveal overfitting; they do not prevent it. Regularisation is one of the main tools for doing so.
Regularisation introduces a parameter λ, called the regularisation parameter, which controls how strongly the weights are penalised. When λ is too small, the weights are barely penalised: the model overfits, so the training cost (Jtrain) is low but Jcv and Jtest are high. When λ is too large, the weights are penalised too heavily: the model underfits, and both Jcv and Jtest are again high. A good λ sits between these extremes.
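As a sketch of this trade-off, the loop below fits ridge regression (L2-regularised linear regression, where scikit-learn's `alpha` plays the role of λ) over a grid of λ values and picks the one with the lowest Jcv. The synthetic data, polynomial degree, and λ grid are illustrative assumptions, not values from the text.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(60, 1))
y = 0.5 * x[:, 0] ** 2 + rng.normal(0, 0.5, size=60)  # noisy quadratic target

# Simple train / cross-validation split (a held-out test set
# would be evaluated the same way after choosing the model).
x_train, x_cv = x[:40], x[40:]
y_train, y_cv = y[:40], y[40:]

# Deliberately flexible features so that a small λ can overfit.
poly = PolynomialFeatures(degree=8, include_bias=False)
X_train = poly.fit_transform(x_train)
X_cv = poly.transform(x_cv)

costs = {}  # λ -> (Jtrain, Jcv)
for lam in (1e-6, 1e-2, 1.0, 1e2, 1e5):
    model = Ridge(alpha=lam).fit(X_train, y_train)
    costs[lam] = (
        mean_squared_error(y_train, model.predict(X_train)),
        mean_squared_error(y_cv, model.predict(X_cv)),
    )

# Choose λ by the lowest cross-validation cost, never by Jtrain.
best_lam = min(costs, key=lambda lam: costs[lam][1])
```

Because the ridge penalty only ever tightens the fit to the training data, Jtrain can only grow as λ grows; Jcv typically falls and then rises, which is why λ must be selected on the cross-validation set.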
A similar trade-off appears with the degree of a polynomial model. When the degree is too low, the model underfits the data; when it is too high, the model overfits it.
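The same diagnosis applies to the polynomial degree. The sketch below compares an underfitting, a reasonable, and an overfitting degree on a synthetic sine dataset; the specific degrees and data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-3, 3, size=(30, 1)), axis=0)
y = np.sin(x[:, 0]) + rng.normal(0, 0.2, size=30)  # noisy sine target

# Interleave points into training and cross-validation halves.
x_train, x_cv = x[::2], x[1::2]
y_train, y_cv = y[::2], y[1::2]

errors = {}  # degree -> (Jtrain, Jcv)
for degree in (1, 3, 12):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    errors[degree] = (
        mean_squared_error(y_train, model.predict(x_train)),
        mean_squared_error(y_cv, model.predict(x_cv)),
    )
```

Raising the degree can only lower Jtrain, because each lower-degree feature space is nested inside the higher-degree one; so, as with λ, the degree has to be chosen by comparing Jcv across candidates rather than by the training cost.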
Overall, regularisation should be used alongside performance measures such as Jcv and Jtest. Both the polynomial degree and the regularisation parameter affect model performance, so both should be chosen by comparing these errors across candidate values.