How to implement logistic regression with scikit-learn
Implementing logistic regression from scratch can be tedious. With scikit-learn, however, it takes only a few lines of code.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Training data: six points with two features each, and binary labels
X = np.array([[1.5, 2.5], [0, 1.5], [0.5, 0.5], [1.5, 0.5], [2, 2], [2, 2.5]])
y = np.array([0, 0, 0, 1, 1, 1])

# Fit the model, then predict on the training data
lr_model = LogisticRegression()
lr_model.fit(X, y)
y_pred = lr_model.predict(X)
print("Prediction on training set:", y_pred)
When fit() is called on a logistic regression object, it runs an optimization algorithm (scikit-learn's default solver is lbfgs, a quasi-Newton method) to find the model parameters, i.e., the coefficients and the intercept, that minimize the logistic loss function. The solver iteratively updates the parameters based on the difference between the predicted probabilities and the actual target values until it converges to a minimum of the loss.
The predicted probability is the sigmoid of a linear combination of the features, f(x) = w1*x1 + w2*x2 + ... + b, and the loss being minimized is the logarithmic (log) loss.
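To make this concrete, here is a minimal sketch that recomputes the model's probabilities by hand from the learned coefficients and intercept (exposed by scikit-learn as coef_ and intercept_), applies the sigmoid, and evaluates the log loss. The variable names (z, manual_proba, log_loss) are illustrative, not part of the library:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.5, 2.5], [0, 1.5], [0.5, 0.5], [1.5, 0.5], [2, 2], [2, 2.5]])
y = np.array([0, 0, 0, 1, 1, 1])

lr_model = LogisticRegression()
lr_model.fit(X, y)

# Linear combination z = w1*x1 + w2*x2 + b, using the learned parameters
z = X @ lr_model.coef_.ravel() + lr_model.intercept_

# The sigmoid maps z to a probability in (0, 1)
manual_proba = 1 / (1 + np.exp(-z))

# This matches the class-1 column of predict_proba
print(np.allclose(manual_proba, lr_model.predict_proba(X)[:, 1]))

# The logarithmic (log) loss that fit() minimizes (plus a regularization term)
log_loss = -np.mean(y * np.log(manual_proba) + (1 - y) * np.log(1 - manual_proba))
print("Log loss on training set:", log_loss)
```

The allclose check prints True because, for binary problems, predict_proba is exactly the sigmoid of the linear combination above.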
After fit() has been called, the logistic regression model is trained and ready to make predictions on new, unseen data. The trained model can predict the target variable for new input features using the predict() method.
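As a usage sketch, the points in X_new below are hypothetical unseen inputs, not part of the original data. predict() returns hard class labels, while predict_proba() returns the probability of each class:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.5, 2.5], [0, 1.5], [0.5, 0.5], [1.5, 0.5], [2, 2], [2, 2.5]])
y = np.array([0, 0, 0, 1, 1, 1])

lr_model = LogisticRegression().fit(X, y)

# Hypothetical new points not seen during training
X_new = np.array([[0.2, 1.0], [1.8, 1.0]])

# Hard class labels (0 or 1)
print("Predicted labels:", lr_model.predict(X_new))

# Class probabilities: one column per class, each row sums to 1
print("Predicted probabilities:\n", lr_model.predict_proba(X_new))
```

Using predict_proba() instead of predict() is useful when you need a confidence score or want to apply a decision threshold other than 0.5.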