Demosthenes
2007-02-26, 02:18 AM
The error when using a rule for approximating a definite integral is given by the following equations:
http://img83.imageshack.us/img83/1827/approxhh0.jpg
I have no idea how these were derived, but these were the equations given to us; we were told we'd learn the derivations in a numerical analysis class. That's all well and good, but I'm not sure how I'm supposed to pick the x value used to compute the coefficient k, which gives the largest possible error. The book offers a very convoluted explanation and seems to pick the x for k almost arbitrarily at one of the limits of integration. That makes some sense, but I can't tell how it decides whether to use a or b as x. If anyone could explain this to me, or link me to a site with a decent explanation, I would greatly appreciate it.
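Since the image isn't visible to me, here is a sketch assuming the standard trapezoidal-rule bound, |E_T| <= K(b-a)^3 / (12n^2), where K = max of |f''(x)| over [a, b]. The point is that x isn't picked arbitrarily: you want whichever x in [a, b] maximizes |f''|. When |f''| happens to be monotone on [a, b], that maximum lands at a or at b (which is why the book appears to just grab an endpoint), but in general it can be an interior point, as this hypothetical example shows:

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def trap_error_bound(f, a, b, n, samples=10001):
    """Bound |E_T| <= K(b-a)^3/(12n^2) by sampling |f''| on a dense grid,
    returning the bound and the x where |f''| was largest."""
    best_x, K = a, 0.0
    for i in range(samples):
        x = a + (b - a) * i / (samples - 1)
        v = abs(second_derivative(f, x))
        if v > K:
            best_x, K = x, v
    return K * (b - a) ** 3 / (12 * n ** 2), best_x

# For f(x) = sin(x) on [0, pi], f''(x) = -sin(x), so |f''| peaks at pi/2,
# an interior point -- neither endpoint would give the right K here.
bound, x_star = trap_error_bound(math.sin, 0.0, math.pi, n=10)
print(x_star)  # near pi/2, where |f''| = |sin x| is largest
print(bound)   # K = 1, so the bound is pi^3 / 1200
```

The grid search is just a crude stand-in for what you'd do by hand: differentiate twice, then find where |f''| is biggest on the interval (check critical points and both endpoints, exactly as in an optimization problem).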