Watch these six (6) videos in order. The total video time is 26 min. Be sure to test your knowledge with the knowledge checks after each video.
Description: This video discusses using linear interpolation, sometimes called the false position method, to break the bounds of the root into two uneven sections.
Hello, this is video NLE|VL|06 and it will be the theory of linear interpolation for solving non-linear equations.
Let's first talk about the principle. Take a function that crosses the x-axis. Next, we have two points xL and xU that bound the root. The lower graph shown here represents the bounds so they can be seen visually. Linear interpolation works by drawing a line between (xL, f(xL)) and (xU, f(xU)) and checking where that line crosses the x-axis; this point is your estimate xR. It can be calculated as xR = xL - f(xL)*(xU - xL)/(f(xU) - f(xL)). Now that we have xR we can find f(xR), and we have split our bounds into two regions: between xL and xR, and between xR and xU. We can now test whether f(xL)*f(xR) < 0; if it is less than zero then the root is between xL and xR, and if not then it is between xR and xU. In this example you'll notice that xR happens to fall in the middle between xL and xU. However, this depends entirely on the values of f(xL) and f(xU). Normally xR does not divide the bounds between xL and xU evenly, and this can make the method more efficient than the bisection method.
Let's look at this in more detail with some graphical steps. First, draw a function that crosses the x-axis and take two points (xL, f(xL)) and (xU, f(xU)). We can then apply the linear interpolation formula for xR and test the bounds to determine where the root lies. In this case the function crosses between xR and xU, so we adjust our bounds to that interval; these become our new xL and xU. We use the formula for xR to determine our next estimate, which is where the line between xL and xU crosses the x-axis, and we can test our bounds again. In this case you'll notice that xR is not in the middle of xL and xU, so the two regions are not even. Again the function crosses between xR and xU, and we keep iterating through the same process until we are satisfied that we are close enough to the root. For example, after six iterations the curve between the bounds has become quite straight. We can take the last xR as our estimate of the root, which in this case is 1.6555, and if you look at f(xR), it is quite close to 0.
Let's now look at the algorithm.
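To accompany the algorithm discussion, here is a minimal Python sketch of linear interpolation (false position) following the description above. The function name, the 1% default tolerance, and the iteration cap are illustrative choices, not values prescribed by the video.

```python
# A minimal sketch of linear interpolation (false position), assuming a 1%
# relative-error stopping criterion and an iteration cap (both illustrative).

def linear_interpolation(f, xL, xU, tol=0.01, max_iter=50):
    """Estimate a root of f bracketed by [xL, xU]."""
    if f(xL) * f(xU) > 0:
        raise ValueError("f(xL) and f(xU) must have opposite signs")
    xR_old = None
    xR = xL
    for _ in range(max_iter):
        # Point where the line through (xL, f(xL)) and (xU, f(xU)) crosses the x-axis
        xR = xL - f(xL) * (xU - xL) / (f(xU) - f(xL))
        # Approximate relative error based on two successive estimates of xR
        if xR_old is not None and abs((xR - xR_old) / xR) < tol:
            break
        xR_old = xR
        # Keep the sub-interval that still brackets the root
        if f(xL) * f(xR) < 0:
            xU = xR          # root lies between xL and xR
        else:
            xL = xR          # root lies between xR and xU
    return xR
```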
This concludes the video. This project could not have been completed without the support of eCampusOntario and the University of Ottawa.
Description: This video implements the linear interpolation method to solve a problem in a step-by-step procedure.
Hello, this is video NLE|VL|07 and it will be a step-by-step example of linear interpolation for solving non-linear equations. First, let's look at the problem. Solve the equation e^x = x^2, and stop iterating when your error εa is less than 1%. For this problem we'll use the bounds xL = -1 and xU = 1.
Step one is to rearrange the equation: we take e^x = x^2 and move everything to one side, so we have f(x) = e^x - x^2, which is of course equal to 0. Now we will do the step-by-step calculations with the following layout. In the top left-hand corner will be the graphical representation of the solution. On the top right side will be the important equations used to solve this problem. On the right side is a graph showing how the estimate of the error, εa, changes over the iterations. On the bottom is a table of all the important calculations done for each iteration.
Let's look at the important equations first. We have the function f(x) = e^x - x^2. Next, we have how to calculate xR: xR = xL - f(xL)*(xU - xL)/(f(xU) - f(xL)). Finally, we have the estimate of the error, εa = |(xR,i - xR,i-1)/xR,i|. Let's first graph the function with the chosen bounds. You'll notice in the top left graph that the function does cross the x-axis as expected, so we can start our first iteration with our bounds xL = -1 and xU = 1. We can use f(x) to determine f(xL) and f(xU), and then use the xR equation to determine the first estimate of the root. The first estimate of xR is -0.46, and we can now calculate f(xR) using f(x). Normally we would also calculate the estimate of the error, but at this point we only have one estimate of xR, so we cannot. Our next step is to determine the bounds. We do this by testing whether f(xL)*f(xR) < 0. In this case it is, which means our new bounds are between xL and xR. Therefore, for the 2nd iteration, xL = -1 and xU = -0.46212. We can now calculate f(xL) and f(xU) as before and then calculate xR. With xR we can go on and calculate f(xR), and now, with two values of xR, we can calculate the estimate of the error, which in this case is about 32%. We can then determine where our new bounds are by checking whether f(xL)*f(xR) is less than zero. In this case it is, so our new bounds will be xL and xR. Starting with iteration 3, you'll notice in the top left-hand corner that the graph has become more linear, which works well for linear interpolation. We use the same steps as before: with xL and xU we calculate f(xL) and f(xU), then xR, and then f(xR). Looking at the error, it has now dropped to about 3.5%, so we're not below 1% yet. We test our bounds one more time, notice that the root is still between xL and xR, and continue on to the next iteration. We continue the same steps for the 4th iteration with the new xL and xU, and you'll notice that the approximation of the error, εa, is now less than 1%. Knowing that, we can stop and take the final xR as our estimate of the root, which in this case is -0.703. We have now solved this problem in 4 iterations.
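For readers who want to check these numbers, here is a short usage sketch that runs the linear_interpolation function from the earlier sketch on this problem.

```python
# Running the earlier linear_interpolation sketch on f(x) = e^x - x^2 with
# bounds [-1, 1] and a 1% stopping tolerance (matching the worked example).
import math

f = lambda x: math.exp(x) - x**2
root = linear_interpolation(f, -1.0, 1.0, tol=0.01)
print(round(root, 3))   # approximately -0.703, reached in a few iterations
```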
This concludes the video. This project could not have been completed without the support of eCampusOntario and the University of Ottawa.
Description: This video discusses how to improve the linear interpolation algorithm to help reduce the number of iterations for ill-behaved functions.
Hello, this is video NLE|VL|08 and it will be the theory of modified linear interpolation for solving nonlinear equations.
Let's first discuss the principle of modified linear interpolation, starting with a limitation of linear interpolation. Suppose you have a function where the value of f at one bound is much, much greater than at the other. This can happen, for example, near an asymptote. If we follow linear interpolation, we draw a line between xL and xU, and where that line crosses the x-axis is our next guess of xR; we can keep doing this for a few iterations. What you'll notice is that one of the bounds, in this case the upper bound, always stays the same, and it takes a long time to converge toward the root. Modified linear interpolation works a little differently. Say we take the same function and the same initial bounds. The start of modified linear interpolation looks the same as linear interpolation: you draw a line between xL and xU, and where it crosses the x-axis is your next estimate of the root. The 2nd iteration also looks very similar. However, this is where the two methods differ. Modified linear interpolation keeps track of the number of times a bound stays the same, and if it stays the same more than twice in a row, the value of f at that bound is divided by two. A line is then drawn between the two bounds, and where that line crosses the x-axis is the next estimate of the root on the next iteration. If that bound still does not change, its f value is halved again, which means it is a quarter of the original value at that bound. You should notice that, with the same number of iterations, modified linear interpolation gets closer to the root than linear interpolation; in other words, it converges faster. One thing to note: it is the f(x) value at that bound that is divided by two, not the x value. If you halved the x value instead, you would be moving the bound itself and might jump over the root. In the example above it was xU that was stuck in one spot; however, it could also be the lower bound xL, and in some cases it could be either xL or xU, depending on the function.
Let's look at the algorithm and how it differs from linear interpolation.
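Here is one possible Python sketch of modified linear interpolation following the description above. The stall counters, the convention of halving f once a bound has been kept two iterations in a row, the tolerance, and the iteration cap are my assumptions about one reasonable implementation; the video's pseudocode may count slightly differently.

```python
# A sketch of modified linear interpolation. The threshold of two consecutive
# iterations before halving is an assumption; only the stored f value at the
# stagnant bound is halved, never the x value, as noted above.

def modified_linear_interpolation(f, xL, xU, tol=0.01, max_iter=50):
    fL, fU = f(xL), f(xU)
    if fL * fU > 0:
        raise ValueError("f(xL) and f(xU) must have opposite signs")
    xR_old = None
    xR = xL
    kept_L = kept_U = 0            # consecutive iterations each bound was kept
    for _ in range(max_iter):
        xR = xL - fL * (xU - xL) / (fU - fL)
        fR = f(xR)
        if xR_old is not None and abs((xR - xR_old) / xR) < tol:
            break
        xR_old = xR
        if fL * fR < 0:            # root between xL and xR: replace xU
            xU, fU = xR, fR
            kept_U = 0
            kept_L += 1
            if kept_L >= 2:        # xL stagnant: halve its stored f value
                fL /= 2
        else:                      # root between xR and xU: replace xL
            xL, fL = xR, fR
            kept_L = 0
            kept_U += 1
            if kept_U >= 2:        # xU stagnant: halve its stored f value
                fU /= 2
    return xR
```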
This concludes the video. This project could not have been completed without the support of eCampusOntario and the University of Ottawa.
Description: This video implements the modified linear interpolation method to solve a problem in a step-by-step procedure.
Hello, this is video NLE|VL|09 and it is a step-by-step example of modified linear interpolation.
First, let's look at the problem. Solve the equation e^x = x^2, and stop iterating when your error εa is less than 1%. For this problem we'll use the bounds xL = -1 and xU = 1. Step one is to rearrange the equation: we take e^x = x^2 and move everything to one side, so we have f(x) = e^x - x^2, which is of course equal to 0. Now let's see how we're going to calculate this. In each step we will use the following layout. In the top left-hand corner will be the graphical representation of the solution. On the top right side will be the important equations used to solve this problem. On the right side is a graph showing how the estimate of the error, εa, changes over the iterations. On the bottom is a table of all the important calculations done for each iteration. Let's first look at our important equations. The first equation is the function f(x) = e^x - x^2. The second equation is the linear interpolation equation, xR = xL - f(xL)*(xU - xL)/(f(xU) - f(xL)). And finally, the last equation is the estimate of the error, εa = |(xR,i - xR,i-1)/xR,i|. Let's draw the graph between the given bounds. You'll notice that the function does cross the x-axis, so the root is bound correctly. We can now do the first iteration, where xL = -1 and xU = 1, and use the function f(x) to determine f(xL) and f(xU). We can then use the linear interpolation equation to determine xR, and with xR we can calculate f(xR). We would normally calculate the error εa, but since we only have one approximation of xR, we cannot do it for the first iteration. We can now test our bounds by checking whether f(xL)*f(xR) is less than zero. In this case it is, which means the root resides between xL and xR, and therefore we change our bounds for the next iteration. Our values become xL = -1 and xU = -0.462, which means we can calculate f(xL) and f(xU) using our function and then xR using the linear interpolation formula. Then we calculate f(xR) and get an estimate of our error, which in this case is approximately 32%. Now we can determine whether the root is between xL and xR, and it is within those two bounds, which means that xL did not change over two iterations. This means we will be applying the modification on the next iteration. On the 3rd iteration we have our xL and xU and calculate f(xL) and f(xU), where you'll notice that even though xL stays the same for the first three iterations (-1 for each of them), f(xL) is actually halved between iterations 2 and 3. This is also visible in the graph in the top left corner. Now we continue the same procedure by calculating xR, then f(xR), and finally our estimate of the error. We can test the bounds now, and in this case the root is between xR and xU, so we can start our 4th iteration. We now have our new points xL and xU, which means we can calculate f(xL) and f(xU), and you'll notice that these are the true values; we are not dividing either f(xL) or f(xU) in half. We continue the same procedure until our error is less than 1%, which happens after five iterations. The best estimate of the root is taken as the last xR, which in this case is equal to -0.703. It is interesting to note that linear interpolation took only 4 iterations to get to this point. Generally speaking, though, modified linear interpolation will converge faster than linear interpolation.
However, in this case the points were already quite linear by the 3rd iteration, so when f(xL) was divided by two we actually got a less accurate estimate of the root. Still, a one-iteration difference is quite minimal in this case.
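To tie this back to the numbers above, here is a short usage sketch that runs the modified_linear_interpolation function sketched earlier on the same problem.

```python
# Same worked example with the modified method; it converges to about -0.703,
# taking one more iteration than plain linear interpolation here, as noted.
import math

f = lambda x: math.exp(x) - x**2
root = modified_linear_interpolation(f, -1.0, 1.0, tol=0.01)
print(round(root, 3))   # approximately -0.703
```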
This concludes the video. This project could not have been completed without the support of eCampusOntario and the University of Ottawa.
Description: This video discusses a method that systematically breaks the bounds of the root into three regions to approximate the root efficiently.
Hello, this is video NLE|VL|10 and it discusses the theory of Ridders’ method for solving nonlinear equations.
First, we'll discuss the general principle behind Ridders' method. Let's start by drawing a function f(x) that crosses the x-axis. We then choose our lower and upper bounds, xL and xU, such that the function crosses the x-axis between them. We can then find the midpoint xM between xL and xU by taking (xL + xU)/2; this is similar to the bisection method. Ridders proposed a transformation of the function using the following formula: let H(x) = f(x)*e^(mx). With this function, if f(x) = 0, then H(x) = 0 as well. However, the value of m must be determined. Another constraint is used: assume that H(xM) is equal to the midpoint between H(xL) and H(xU). With the three points and this constraint, m can actually be solved for, and the function H(x) can be plotted, knowing that xM is the average of xL and xU and H(xM) is the average of H(xL) and H(xU). The points of H at xL, xM, and xU therefore form a straight line, so linear interpolation can be used on those three points to determine a good estimate of the root. This may seem like a lot of steps to find an estimate of the root; however, Ridders proposed a single equation based on xL, xM, and xU: xR = xM + (xM - xL)*sgn(f(xL) - f(xU))*f(xM)/√(f(xM)^2 - f(xL)*f(xU)). This may seem like a large equation; however, it removes the steps of finding the m value as well as the function H(x). With xR and xM, we've now broken the original bounds between xL and xU into three regions. In the region shown, xR is greater than xM, so we can test the different regions: is f(xL)*f(xM) less than zero, is f(xR)*f(xU) < 0, and finally, is f(xM)*f(xR) < 0? Only one of these subregions should give a product less than zero, and that becomes the new bounds for the next iteration. It should be noted that xR is not always greater than xM, so you'll need to check the order of the points when testing the bounds. For example, this function has xR less than xM.
Let's now look at the graphical steps for Ridders' method. First, we graph a function that crosses the x-axis, and we choose two points on either side of the root, xL and xU. We calculate the midpoint xM as (xL + xU)/2. Now we have three points at xL, xM, and xU, and we can use them to calculate xR = xM + (xM - xL)*sgn(f(xL) - f(xU))*f(xM)/√(f(xM)^2 - f(xL)*f(xU)). This gives us a value of xR between xL and xU, and we can now test the three different ranges to determine where the new bounds will be. In this case the root is between xR and xU, so these become our new xL and xU, and we can repeat the process of determining the midpoint between them. We then calculate xR, and once we have xR we again have three regions in which to check where the function crosses. In this case it is between xR and xM, so we set new bounds for xL and xU and continue the process for as many iterations as we'd like. What you'll notice for this method is that it converges quite quickly, and by the time the iterations are complete the function between the bounds is very linear. The best estimate of the root is xR, which in this case is 1.6573.
Let's now look at the algorithm for Ridders’ method.
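Here is a minimal Python sketch of Ridders' method following the description above; the bracket-selection details, the tolerance, and the iteration cap are my assumptions about one reasonable implementation.

```python
# A sketch of Ridders' method: bisection midpoint, Ridders' xR formula, then
# keep whichever of the three sub-regions still brackets the root.
import math

def ridders(f, xL, xU, tol=0.01, max_iter=50):
    fL, fU = f(xL), f(xU)
    if fL * fU > 0:
        raise ValueError("f(xL) and f(xU) must have opposite signs")
    xR_old = None
    xR = xL
    for _ in range(max_iter):
        xM = (xL + xU) / 2                     # midpoint, as in bisection
        fM = f(xM)
        sign = 1.0 if fL - fU > 0 else -1.0    # sgn(f(xL) - f(xU))
        xR = xM + (xM - xL) * sign * fM / math.sqrt(fM**2 - fL * fU)
        fR = f(xR)
        if xR_old is not None and abs((xR - xR_old) / xR) < tol:
            break
        xR_old = xR
        # Order the two interior points, then test the three sub-regions
        (lo, fLo), (hi, fHi) = sorted([(xM, fM), (xR, fR)])
        if fL * fLo < 0:           # root in [xL, lo]
            xU, fU = lo, fLo
        elif fLo * fHi < 0:        # root in [lo, hi]
            xL, fL = lo, fLo
            xU, fU = hi, fHi
        else:                      # root in [hi, xU]
            xL, fL = hi, fHi
    return xR
```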
This concludes the video. This project could not have been completed without the support of eCampusOntario and the University of Ottawa.
Description: This video implements Ridders’ method to solve a problem in a step-by-step procedure.
Hello, this is video NLE|VL|11 and it is a step-by-step example of Ridders’ method for solving non-linear equations.
First, let's look at the problem. Solve the equation e^x = x^2, and stop iterating when your error εa is less than 1%. For this problem we'll use the bounds xL = -1 and xU = 1. Step one is to rearrange the equation: we take e^x = x^2 and move everything to one side, so we have f(x) = e^x - x^2, which is of course equal to 0. Let's look at a step-by-step procedure of the calculations using the following layout. In the top left-hand corner will be the graphical representation of the solution. On the top right side will be the important equations used to solve this problem. On the right side is a graph showing how the estimate of the error, εa, changes over the iterations. On the bottom is a table of all the important calculations done for each iteration. Let's now look at the equations. The first equation is our function, f(x) = e^x - x^2. Next is the large equation from Ridders' method, xR = xM + (xM - xL)*sgn(f(xL) - f(xU))*f(xM)/√(f(xM)^2 - f(xL)*f(xU)). We also have the estimate of the error, εa = |(xR,i - xR,i-1)/xR,i|. One equation that's not shown is how to calculate xM; xM is simply the midpoint between xL and xU and is quite easy to calculate. First, let's graph the function between the given bounds of -1 and 1. What you'll notice is that the function crosses the x-axis, so the root is correctly bound. For the first iteration, using the points xL = -1 and xU = 1, we calculate the midpoint xM, which is equal to 0, and then use our function f(x) to calculate f(xL), f(xM), and f(xU). We now use Ridders' equation to find xR, and then use the function again to find f(xR). We would usually calculate the approximate error εa, but since we only have one estimate of the root, we cannot do it yet. Next, we have to determine the interval where the root lies. With Ridders' method this is a little more involved, because the bounds are split into three regions and xR could be either less than or greater than xM. Visually, it's easy to see that the new bounds are going to be between xL and xR, so we have our bounds for the 2nd iteration. From the graph you can see that the function is almost a straight line already, but we continue with xL and our new xU. We follow the same steps as before: calculate xM, then f(xL), f(xM), and f(xU), which are used to calculate xR, and finally get our first estimate of the error. Our first estimate of the error is already 1.5%, so we're very close to being below 1%. I want to draw your attention to the value of f(xR): it is already quite close to 0. Since we're not below 1%, we continue with a 3rd iteration. We need to choose where our next interval will be, and it will be between xM and xR, so we adjust the bounds for the 3rd iteration and continue through the same procedure until we get to εa. In this case the approximate error is well below 1% and we could stop here; however, for the sake of looking at Ridders' method, we will do one more, a 4th iteration. You'll see that the εa value is practically 0, as is the f(xR) value, so we can take the best estimate of the root as the last value of xR, which in this case is -0.703, and we have now concluded this problem.
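As before, here is a short usage sketch that runs the ridders function from the earlier sketch on this problem.

```python
# The worked example with Ridders' method; the approximate error drops below 1%
# within a few iterations and the estimate rounds to -0.703.
import math

f = lambda x: math.exp(x) - x**2
print(round(ridders(f, -1.0, 1.0, tol=0.01), 3))   # approximately -0.703
```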
This concludes the video. This project could not have been completed without the support of eCampusOntario and the University of Ottawa.