Wednesday, 2 October 2013

Uncertainties in Measurements


All measurements have a degree of uncertainty regardless of precision and accuracy. This uncertainty arises from two sources: the limitations of the measuring instrument (systematic error) and the skill of the experimenter making the measurements (random error). 
    1. Introduction
    2. Systematic and Random Error
    3. Calculating Error
    4. Methods of Reducing Error
    5. Outside links
    6. References
    7. Problems
    8. Solutions

    Introduction

    As the illustration below demonstrates:
    [Figure: graduated cylinder. The volume is read at the lowest point of the slight curve formed by the meniscus.]
    The graduated cylinder in the picture contains a certain amount of water to be measured. According to the marked lines, the amount of water is somewhere between 40 ml and 50 ml. By checking where the bottom of the meniscus lies against the ten smaller lines, the amount of water is seen to lie between 44 ml and 45 ml. The next step is to estimate the uncertainty between 44 ml and 45 ml. The level appears to sit just above 44.0 ml and well below 44.5 ml, so we report the measured amount as approximately 44.1 ml. The graduated cylinder itself may also be distorted such that the graduation marks contain inaccuracies, providing readings slightly different from the actual volume of liquid present. 

    Systematic and Random Error

    When we use tools meant for measurement, we assume that they are correct and accurate; however, measuring tools are not always right. In fact, they have naturally occurring errors called systematic errors. A miscalibrated weighing scale is a common example. There are two specific types of systematic error:
    • Offset or zero-setting error: the measuring tool does not read zero when the quantity being measured is zero.
    • Multiplier or scale-factor error: the measuring tool consistently responds to changes with readings that are greater or smaller than the actual change.
    [Figure: weighing scale]
    Random error: Sometimes called human error, random error is determined by the experimenter's skill or ability to perform the experiment and read scientific measurements. These errors are random in that the results may come out too high or too low. Random error often determines, or limits, the precision of the experiment. For example, if we were to time a revolution of a steadily rotating turntable, the random error would come from our reaction time, which would vary due to a delay in starting (an underestimate of the actual result) or a delay in stopping (an overestimate of the actual result).
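    To make these two categories concrete, here is a minimal sketch (in Python, with made-up numbers that are not tied to any real instrument) of how a zero offset, a scale-factor error, and random noise each distort a reading:

```python
import random

def scale_reading(true_mass, offset=0.02, scale_factor=1.03, noise=0.01):
    """Simulate one reading from a hypothetical, imperfect scale.

    offset       -- zero-setting error: reported even when nothing is on the scale
    scale_factor -- multiplier error: the scale responds 3% too strongly to changes
    noise        -- random error: varies unpredictably from reading to reading
    """
    return scale_factor * true_mass + offset + random.gauss(0, noise)

# Repeated readings of a 0.500 kg object scatter (random error) around a
# value that is consistently too high (systematic error).
readings = [scale_reading(0.500) for _ in range(5)]
print(readings)
```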
    Random vs Systematic Errors

    [Figure: two targets comparing the scatter produced by random error with the consistent deviation produced by systematic error]
    In this experiment, a series of shots is fired at a target. Random errors are caused by anything that makes the shots inconsistent, so that they land at scattered points on the target. For example, the shooter may have an unsteady hand, or a change in the environment may distort the shooter's view. These errors produce the scattering of shots shown by the right target in the figure. A systematic error, on the other hand, includes consistent errors that always arise. For example, the gun may be misaligned or there may be some other technical problem with the gun. This type of error yields a pattern similar to the left target, with shots deviating by roughly the same amount from the center area. 


    [Figure: measuring the length of a pencil with a ruler]
    When measuring a defined length with a ruler, there is a source of uncertainty: the measurement may require estimation or rounding between two marks. When making this estimate, it is possible to overestimate or underestimate the measured value, meaning there is a possibility of random error. The ruler itself may also be too short or too long, causing a systematic error. For example, the illustration above shows a pencil whose length lies between 25 cm and 26 cm. Using the intermediate mark, the ruler shows in greater detail that the pencil's length lies somewhere between 25.5 cm and 26 cm. One may therefore reasonably estimate the length of the pencil as 25.7 cm. A systematic error, however, would likely be more subtle than a random error, because the environment may affect the ruler in a way that is difficult to notice, or the ruler itself may have slightly inaccurate markings.
    Precision vs. Accuracy
    Precision is often referred to as reproducibility or repeatability. For example, consider the precision with which the golf balls are shot in the figure below. A set of shots that are only precise would mean you are able to cluster your shots near each other on the green, but you are not reaching your goal, which is to get the golf balls into the hole. This concept is illustrated in the left picture of the figure below. Accuracy, on the other hand, is how close a value is to the true or accepted value. The picture on the right demonstrates accuracy: the balls all land in the hypothetically large hole, but at different corners of it. Those shots are not precise, since they are relatively spread out, but they are accurate, because they all reached the hole. To sum up, accuracy is the ability to hit the desired target area or measured value, while precision is the agreement of shots or measured values with one another, regardless of whether they hit the intended target or value. 
    [Figure: golf balls on a green illustrating precision versus accuracy]
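    The same distinction can be expressed numerically. As a rough sketch (using hypothetical measurements of a quantity whose true value is taken to be 25.70 cm), the spread of repeated measurements reflects precision, while the closeness of their average to the true value reflects accuracy:

```python
import statistics

measurements = [25.71, 25.69, 25.70, 25.72, 25.68]  # hypothetical repeated readings, cm
true_value = 25.70

mean = statistics.mean(measurements)
spread = statistics.stdev(measurements)  # precision: how well the readings agree with each other
bias = mean - true_value                 # accuracy: how close the average is to the true value

print(f"mean = {mean:.3f} cm, spread = {spread:.3f} cm, bias = {bias:+.3f} cm")
```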

    Calculating Error

    Since equipment used in an experiment can only report a measured value with a certain degree of accuracy, calculating the extent to which a measurement deviates from the value accepted by the scientific community is often helpful in gauging the accuracy of equipment. Such a calculation is referred to as the percent error of a measurement and is represented by the following formula: 
    Percent Error = [(Experimental Result - Accepted Value) / Accepted Value] × 100%
    Consider this real-world example to understand the role of percent error calculations in determining the accuracy of measuring equipment: a toy company that ships its products around the world must calculate the fuel costs associated with transporting the weight of its standard 2-by-3-foot box. To predict shipping costs and create a reasonable budget, the company must obtain accurate mass measurements of its boxes. The accepted mass of a standard box is 0.525 kg. The company measures a sample of three dozen boxes with a sophisticated electronic scale and with an analog scale, yielding average masses of 0.531 kg and 0.49 kg, respectively. A calculation of percent error for each device yields the following results: 
    Percent Error of Electronic Scale = [(0.531 kg - 0.525 kg) / 0.525 kg] × 100% = 1.14%
    Percent Error of Analog Scale = [(0.49 kg - 0.525 kg) / 0.525 kg] × 100% = -6.67%
    Immediately, one notices that the electronic scale yields a far more accurate measurement, with a percent error almost six times smaller in magnitude than that obtained from the analog scale. Also note that percent error may take on a negative value, as illustrated by the calculation for the analog scale. This simply indicates that the measured average lies 6.67% below the accepted value. Conversely, a positive percent error indicates that the measured average is higher than the accepted value. 
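    The percent error formula translates directly into a short calculation. The sketch below (in Python) reproduces the two results above from the same numbers:

```python
def percent_error(experimental, accepted):
    """Percent Error = [(Experimental Result - Accepted Value) / Accepted Value] x 100%"""
    return (experimental - accepted) / accepted * 100

accepted_mass = 0.525  # kg, accepted mass of the standard box

print(percent_error(0.531, accepted_mass))  # electronic scale: about  1.14
print(percent_error(0.490, accepted_mass))  # analog scale:     about -6.67
```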

    Methods of Reducing Error

    While inaccuracies in measurement may arise from the systematic error of equipment or random error of the experimenter, there are methods that can be employed to reduce error:
    Weighing by difference: Mass is an important measurement in many experiments and it is critical for labs to reduce error in mass measurements whenever possible. A simple way of reducing the systematic error of electronic balances commonly found in labs is to weigh masses by difference. This procedure entails the following:
    1) finding the mass of both the desired material and the container holding the material,
    2) transferring an approximate amount of the material to another container,
    3) remeasuring the mass of the original container, and
    4) calculating the mass of the removed sample by taking the difference between the initial and final weights of the original container. The following formula illustrates the procedure used for weighing by difference: 
    (mass of container + material) - (mass of container + remaining material) = mass of removed material
    While most electronic balances have a "tare" or "zero" function that allows one to automatically calculate a mass by difference, equipment can be faulty so it is important to remember the fundamental logic behind weighing by difference.
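    As a simple illustration of that logic (with hypothetical balance readings), note that any constant offset in the balance appears in both readings and cancels when the difference is taken:

```python
def mass_by_difference(before_transfer, after_transfer):
    """(mass of container + material) - (mass of container + remaining material).

    A constant zero-offset error in the balance is present in both readings,
    so it cancels in the subtraction.
    """
    return before_transfer - after_transfer

# Hypothetical readings: 52.317 g before the transfer, 47.062 g after it
print(mass_by_difference(52.317, 47.062))  # about 5.255 g of material removed
```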
    Averaging Results: Since the accuracy of measurements is limited in part by the capacity of an experimenter to interpret their equipment, it makes sense to take the average of several trials rather than rely on a single trial. The reasoning behind averaging results is that a measured value falling below the actual value may be offset by another falling above it. By performing a series of trials (the more trials, the more accurate the averaged result), an experimenter can account for some of the random error and obtain a measurement with higher accuracy. 
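    A minimal sketch of this idea, using hypothetical titration volumes, is simply the arithmetic mean of the trials:

```python
import statistics

# Hypothetical trial volumes (mL); individual readings fall above and below
# the actual value, and averaging lets those random deviations partially cancel.
trials = [24.41, 24.37, 24.44, 24.39, 24.40]
print(statistics.mean(trials))  # 24.402 mL, a better estimate than any single trial
```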
    Calibrating Equipment: Just as random error can be reduced by averaging several trials, the systematic error of equipment can be reduced by calibrating a measuring device. This usually entails comparing a standard device of well-known accuracy against the device requiring calibration. Additionally, procedures exist for different kinds of equipment that can reduce the systematic error of the device. For example, a typical buret in a lab may be used to carry out a titration involving the neutralization of an acid and a base. If the buret formerly held acid but must now hold a base, it would benefit the experimenter to condition (rinse) the buret with the base before carrying out the titration, so that residual acid does not contaminate the new solution and skew the delivered volume. Such procedures, together with calibration, can reduce the systematic error of a device. 
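    One common calibration approach, sketched below with hypothetical numbers, is to read two standards of well-known value and derive a linear correction that removes the zero-setting and scale-factor errors described earlier:

```python
def calibration_curve(raw_low, raw_high, std_low, std_high):
    """Fit true_value = scale * raw_reading + offset from two standard readings."""
    scale = (std_high - std_low) / (raw_high - raw_low)
    offset = std_low - scale * raw_low
    return scale, offset

# Hypothetical balance readings for 0 g and 100.000 g certified standards
scale, offset = calibration_curve(0.020, 100.310, 0.000, 100.000)

raw = 50.230                 # a later, uncorrected reading
print(scale * raw + offset)  # corrected value, about 50.065 g
```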
