Cost Function

We can measure the accuracy of our hypothesis function by using a cost function. It takes an average (actually a slightly fancier version of an average) of the differences between the hypothesis's predictions on the inputs x and the actual outputs y.

$$
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2 = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)^2
$$

To break it apart, it is $\frac{1}{2}\bar{x}$, where $\bar{x}$ is the mean of the squares of $h_\theta(x_i) - y_i$, i.e. of the differences between the predicted values and the actual values.
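As a concrete illustration, here is a minimal Python sketch of this cost function. It assumes the single-variable linear hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$ (the form used earlier in these notes, not restated in this section), and the data points in the usage example are made up:

```python
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Mean squared error cost J(theta0, theta1).

    Assumes the linear hypothesis h_theta(x) = theta0 + theta1 * x
    (an assumption for this sketch; the hypothesis is defined
    elsewhere in the notes).
    """
    m = len(y)                            # number of training examples
    predictions = theta0 + theta1 * x     # h_theta(x_i) for every example
    squared_errors = (predictions - y) ** 2
    return squared_errors.sum() / (2 * m)  # the 1/(2m)-scaled average

# Usage example with made-up data:
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(0.0, 1.0, x, y))  # perfect fit  -> 0.0
print(compute_cost(0.0, 0.5, x, y))  # worse fit    -> larger cost
```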

This function is otherwise called the “squared error function”, or “mean squared error”. The mean is halved $\left(\frac{1}{2}\right)$ as a convenience for the computation of gradient descent, since the derivative of the square term cancels out the $\frac{1}{2}$ term.

[Image: a visual summary of what the cost function does.]
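To make that cancellation explicit, here is a short derivation (a sketch, assuming the linear hypothesis $h_\theta(x) = \theta_0 + \theta_1 x$, so that $\partial h_\theta(x_i)/\partial \theta_1 = x_i$):

$$
\frac{\partial}{\partial \theta_1} J(\theta_0, \theta_1)
= \frac{1}{2m} \sum_{i=1}^{m} 2 \left( h_\theta(x_i) - y_i \right) x_i
= \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right) x_i
$$

The factor of 2 produced by differentiating the square cancels the $\frac{1}{2}$, leaving a plain average of the error terms.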
