Train Linear Regression with the direct method.

Hello! Today I am going to share with you the basics of linear regression. I haven't planned to use Gradient Descent for this, because I want training to be quick and direct.

So, I am going to apply the closed-form solution (the Normal Equation) to Linear Regression with MSE to get θ (the model's parameter vector). We will see this as we go on.

Linear regression

Ok, let's start with linear regression.

θ: the model's parameter vector, containing the bias term θ₀ and the feature weights θ₁ to θₙ.

x: the instance's feature vector, containing x₀ to xₙ, with x₀ always equal to 1.

θ · x: the dot product of the vectors θ and x.

h_θ: the hypothesis function, using the model parameters θ.
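Putting these definitions together, the model's prediction is the standard linear regression equation:

\[
\hat{y} = h_\theta(\mathbf{x}) = \theta \cdot \mathbf{x} = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \dots + \theta_n x_n
\]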

import numpy as np

X = 2 * np.random.rand(100, 1)           # input X
y = 4 + 3 * X + np.random.randn(100, 1)  # y = 4 + 3x plus Gaussian noise

Apply MSE as the loss to minimize
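The cost function we minimize is the Mean Squared Error over the m training instances:

\[
\mathrm{MSE}(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( \theta^{T} \mathbf{x}^{(i)} - y^{(i)} \right)^{2}
\]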

Let's calculate the best θ (the Normal Equation)

Now we have to find the value of θ that minimizes the cost function. There is a closed-form solution, also known as the Normal Equation, which gives you the result directly.
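The Normal Equation computes the minimizing θ̂ in one step:

\[
\hat{\theta} = \left( \mathbf{X}^{T} \mathbf{X} \right)^{-1} \mathbf{X}^{T} \mathbf{y}
\]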

X_b = np.c_[np.ones((100, 1)), X]  # concatenate x0 = 1 to each instance
theta_cap = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)

Yes, now we have our theta_cap, and we can use it for prediction.
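As a quick sanity check, here is a self-contained sketch that generates the same kind of data and verifies that the Normal Equation recovers parameters close to the true values (4 and 3). The seed is my own choice, added for reproducibility:

```python
import numpy as np

# Reproducible data generation (the seed is an assumption, not from the post)
rng = np.random.default_rng(42)
X = 2 * rng.random((100, 1))                   # input features
y = 4 + 3 * X + rng.standard_normal((100, 1))  # target with Gaussian noise

X_b = np.c_[np.ones((100, 1)), X]  # concatenate x0 = 1 to each instance
theta_cap = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
print(theta_cap.ravel())  # roughly [4, 3], shifted a little by the noise
```

The noise keeps the estimates from landing exactly on 4 and 3, but with 100 samples they should be close.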

Read here for the derivation.

Now let's move on to prediction.

X_new = np.array([[0], [2]])             # create input
X_new_b = np.c_[np.ones((2, 1)), X_new]  # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_cap)       # predict using theta_cap

y_predict will give you the model's predictions, computed from theta_cap.


The computational complexity of inverting such a matrix is typically about O(n³), depending on the implementation.
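Because of this cost and the numerical instability of an explicit inverse, a common alternative (a sketch, not what the post itself uses) is np.linalg.lstsq, which solves the same least-squares problem via SVD without inverting the matrix:

```python
import numpy as np

# Same synthetic data as before; the seed is an assumption for reproducibility
rng = np.random.default_rng(0)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X + rng.standard_normal((100, 1))
X_b = np.c_[np.ones((100, 1)), X]

# lstsq solves min ||X_b @ theta - y||^2 using SVD, no explicit inverse
theta_best, residuals, rank, sv = np.linalg.lstsq(X_b, y, rcond=None)
print(theta_best.ravel())  # close to [4, 3]
```

This route also works when XᵀX is singular or ill-conditioned, where np.linalg.inv would fail.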

Once you have trained your Linear Regression model, predictions are very fast.

I hope you enjoyed my post. Follow me for more updates!