Linear regression finds a linear relationship between the target and one or more predictors.
The core idea is to obtain the line that best fits the data: the line for which the total prediction error across all data points is as small as possible. The error is the distance from a data point to the regression line.
The equation of a line: Y = WX + B.
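To make the error concrete, here is a plain-Python sketch (no TensorFlow; the line parameters W and B below are chosen only for illustration) that computes the squared prediction error of a line against a few points:

```python
# Hypothetical line parameters, chosen only for illustration
W, B = 0.5, -0.5

# A few (x, y) data points
points = [(1, 0), (3, 1), (5, 2), (7, 3)]

# Squared vertical distance from each point to the line y = W*x + B
errors = [(W * px + B - py) ** 2 for px, py in points]
total_error = sum(errors)

print(total_error)  # these points lie exactly on the line, so the total error is 0.0
```

A best-fit procedure searches for the W and B that make this total error as small as possible.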
Below are the steps to implement linear regression in Python with TensorFlow 1.x.
1.) Install the tensorflow and matplotlib libraries on your machine and import them into your program file.
import matplotlib.pyplot as plt
import tensorflow as tf
2.) Define two variables, two placeholders (whose values will be supplied at run time), and the linear model itself.
w = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
l_model = w * x + b
3.) The code below measures the loss: tf.square computes the squared error at each data point, and tf.reduce_sum adds those errors up to give the total loss for the model.
sq_delta = tf.square(l_model - y)
loss = tf.reduce_sum(sq_delta)
4.) Create an initializer for the global variables and a gradient-descent optimizer that minimizes the loss. Training this op is what actually finds the optimal values of W and B; we are using a machine learning approach rather than solving for them directly.
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
5.) Create a session and run the initializer so the variables receive their starting values.
sess = tf.Session()
sess.run(init)
6.) The code below trains the regression model for 100 gradient-descent steps on the actual values of X and Y.
for i in range(100):
    sess.run(train, {x: [1, 3, 5, 7], y: [0, 1, 2, 3]})
7.) The code below predicts Y for new values of X using the model trained in step 6.
print(sess.run(l_model, {x: [1, 3, 5, 6]}))
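The steps above can also be mimicked without TensorFlow. The sketch below is a plain-Python version of the same gradient-descent loop (the data, the starting values of w and b, and the learning rate match the code above; the iteration count is raised to 1000 so the parameters have time to settle):

```python
xs = [1, 3, 5, 7]
ys = [0, 1, 2, 3]
w, b = 0.3, -0.3  # same initial values as the TensorFlow variables
lr = 0.01         # same learning rate as GradientDescentOptimizer(0.01)

for _ in range(1000):
    # Gradients of the sum-of-squared-errors loss with respect to w and b
    grad_w = sum(2 * (w * xi + b - yi) * xi for xi, yi in zip(xs, ys))
    grad_b = sum(2 * (w * xi + b - yi) for xi, yi in zip(xs, ys))
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # approaches w = 0.5, b = -0.5, the best-fit line for this data
```

This is essentially what GradientDescentOptimizer does for us: TensorFlow derives the gradients automatically instead of requiring them to be written out by hand.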
Below is the complete code and its output.
import matplotlib.pyplot as plt
import tensorflow as tf

# Model parameters (variables) and inputs (placeholders)
w = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Linear model and sum-of-squared-errors loss
l_model = w * x + b
sq_delta = tf.square(l_model - y)
loss = tf.reduce_sum(sq_delta)

# Gradient-descent training op
init = tf.global_variables_initializer()
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

sess = tf.Session()
sess.run(init)

# Train on the data, then predict for new inputs
for i in range(100):
    sess.run(train, {x: [1, 3, 5, 7], y: [0, 1, 2, 3]})
print(sess.run(l_model, {x: [1, 3, 5, 6]}))
Output
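Since the four training points lie exactly on a line, the values the loop converges toward can be verified with the closed-form least-squares formulas. Here is a plain-Python sanity check (no TensorFlow):

```python
xs = [1, 3, 5, 7]
ys = [0, 1, 2, 3]
n = len(xs)

mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form simple least-squares slope and intercept
w = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(xs, ys)) \
    / sum((xi - mean_x) ** 2 for xi in xs)
b = mean_y - w * mean_x

print(w, b)                                 # 0.5 -0.5
print([w * xi + b for xi in [1, 3, 5, 6]])  # [0.0, 1.0, 2.0, 2.5]
```

After 100 training steps the TensorFlow model's predictions for x = [1, 3, 5, 6] should be close to these exact values.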
