Simple linear regression with Scikit-learn


Yesterday, I implemented a simple linear regression algorithm with gradient descent, and today I would like to try the same thing with a library.

import numpy as np
from sklearn.linear_model import SGDRegressor

x_train = np.array([1.0, 2.0, 3.0]).reshape((-1, 1))   # features, as a 2-D column
y_train = np.array([300.0, 500.0, 700.0])              # target values

sgdr = SGDRegressor(max_iter=10000)
sgdr.fit(x_train, y_train)                             # runs stochastic gradient descent
print(sgdr)
b = sgdr.intercept_
w = sgdr.coef_
print(f"model parameters:    w: {w}, b: {b}")

These w and b come out pretty much the same as the values produced by the gradient-descent implementation I wrote before.
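As a quick sanity check, the three training points lie exactly on the line y = 200x + 100, so w should land near 200 and b near 100. Continuing from the snippet above:

y_pred = sgdr.predict(x_train)    # computes w * x + b for each sample
print(f"predictions: {y_pred}")   # should be close to [300. 500. 700.]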

This is just a single-feature linear model, but in practice you usually have more than one feature. SGDRegressor therefore accepts the features as a 2-D array, with one row per sample and one column per feature, like this:

[[1, 2, 3],
 [4, 5, 6]]

Therefore, x_train is reshaped with .reshape((-1, 1)) and becomes

[[1.]
 [2.]
 [3.]]
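To see what the reshape actually does, compare the array shapes before and after (using the same values as above):

print(np.array([1.0, 2.0, 3.0]).shape)                   # (3,)   1-D, rejected by fit
print(np.array([1.0, 2.0, 3.0]).reshape((-1, 1)).shape)  # (3, 1) 3 samples, 1 feature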

The .fit method runs stochastic gradient descent for at most max_iter passes over the training data. Once sgdr is trained, you can read the intercept b from sgdr.intercept_ and the weight(s) w from sgdr.coef_.
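The same code works with several features without any changes; only the shape of the input differs. Here is a minimal sketch with two features, using made-up values that happen to lie on y = 200*x1 + 100*x2:

import numpy as np
from sklearn.linear_model import SGDRegressor

X_train = np.array([[1.0, 2.0],
                    [2.0, 1.0],
                    [3.0, 3.0]])              # 3 samples, 2 features (made-up data)
y_train = np.array([400.0, 500.0, 900.0])     # made-up targets

sgdr = SGDRegressor(max_iter=10000)
sgdr.fit(X_train, y_train)
print(f"w: {sgdr.coef_}, b: {sgdr.intercept_}")   # coef_ holds one weight per feature

Since the data follows y = 200*x1 + 100*x2, coef_ should end up somewhere near [200, 100] and intercept_ near 0.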