The following is a brief history of personal noise dosimeters and how they came to be used.
1960s
The original OSHA noise regulation, 29 CFR 1910.95, introduced in 1969 the requirement to measure the time-weighted average (TWA) noise level over a standard working day in order to quantify the risk of a worker suffering measurable hearing loss at work. Research had established that the higher the noise level and the longer the exposure time, the greater the risk of hearing loss.
But how was this new risk to be measured with the technology available at the time? The standard instrument was the sound level meter: an analog device with a limited dynamic range and very limited measuring capability.
In a noisy environment the attenuator was typically set to start at 90 dB, the level at which noise was suspected of contributing to hearing loss. With a display range of perhaps only 20 or 30 dB on scale, the user was left with a very compressed observable range.
Imagine the difficulty of following a rapidly swinging needle across an analog display and trying to eyeball the average of a relatively steady signal, let alone a highly variable one. The traditional sound level meter was therefore a difficult tool for the job, which led a number of electronics manufacturers to enter the health and safety market with direct-reading instruments that retained the characteristics of the standard sound level meter yet calculated the average in a different way.
1970s
The 1970s saw digital electronics and early microprocessors reach the general market. Some of the newer manufacturers in the safety field realized that these components could provide a direct reading of the TWA as a single number on a display. The physical design also changed: the microphone was moved to the end of a short cable and clipped to the subject’s shoulder, while the measuring instrument itself was worn on a belt or carried in a pocket. The dedicated personal noise dosimeter had arrived.
It was relatively easy to make the dosimeters behave like a sound level meter, as long as the settings simulated the correct frequency weighting, the detector response, the exchange rate that reflects the increasing risk of higher levels, and the threshold cutoff that excludes low levels from the dose. A greater challenge to design and program was the user interface, where a display screen showed a single measured result.
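To make the exchange rate and threshold cutoff concrete, the following minimal sketch (in Python) shows the dose and TWA arithmetic such an instrument performs, assuming OSHA-style parameters: a 90 dB criterion level, a 5 dB exchange rate, an 8-hour criterion time and an 80 dB cutoff. The function names and sample data are purely illustrative.

import math

CRITERION_LEVEL = 90.0   # dB(A) criterion level
EXCHANGE_RATE = 5.0      # dB per halving of permitted exposure time
CRITERION_HOURS = 8.0    # criterion (reference) duration
THRESHOLD = 80.0         # levels below this cutoff are ignored

def dose_percent(samples_db, sample_seconds):
    # Accumulate percent noise dose from regularly spaced level samples.
    dose = 0.0
    for level in samples_db:
        if level < THRESHOLD:
            continue  # below the cutoff: contributes nothing to the dose
        # Permitted exposure time (hours) at this level
        allowed_hours = CRITERION_HOURS / 2 ** ((level - CRITERION_LEVEL) / EXCHANGE_RATE)
        dose += (sample_seconds / 3600.0) / allowed_hours
    return 100.0 * dose

def twa_from_dose(dose_pct):
    # Convert percent dose back to an 8-hour TWA in dB(A).
    if dose_pct <= 0:
        return float("-inf")
    return CRITERION_LEVEL + (EXCHANGE_RATE / math.log10(2)) * math.log10(dose_pct / 100.0)

# A steady 95 dB(A) over an 8-hour shift, stored as one-minute samples
shift = [95.0] * (8 * 60)
print(dose_percent(shift, 60))                  # about 200 (percent dose)
print(twa_from_dose(dose_percent(shift, 60)))   # about 95 dB(A) TWA

A steady 95 dB(A) for eight hours halves the permitted exposure time once (one 5 dB step), so the dose comes out at 200 percent and the equivalent TWA at 95 dB(A).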
Further development of the dosimeter design soon added data logging, i.e., storing samples of the noise level at regular intervals throughout the day. Memory was scarce at the time, so samples could only be stored at one-minute intervals over the working day. The first generation of noise dosimeters connected to dot-matrix printers to produce a hard copy of the time-history results.
1990s
The inexorable march of progress gave us dosimeters with multiple setup configurations during the 1990s, so that a single instrument could be set up to measure according to both the original OSHA permitted exposure level and the hearing conservation settings with the reduced 80 dB cutoff introduced in the early 1980s. Previously, it had been a case of choosing one setting or the other and repeating the measurement on another day if you wanted both sets of results for the same worker.
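As a rough illustration of what running two setups over the same measurement means, this hypothetical Python sketch applies both a permitted-exposure-level configuration (90 dB threshold) and a hearing-conservation configuration (80 dB threshold) to one stream of one-minute samples; the names and values are illustrative only.

def dose_for_setup(samples_db, sample_seconds, threshold_db):
    # Percent dose with a 90 dB criterion level and 5 dB exchange rate,
    # counting only samples at or above the given threshold.
    dose = 0.0
    for level in samples_db:
        if level < threshold_db:
            continue
        allowed_hours = 8.0 / 2 ** ((level - 90.0) / 5.0)
        dose += (sample_seconds / 3600.0) / allowed_hours
    return 100.0 * dose

# Four hours at 85 dB(A) followed by four hours at 95 dB(A), one-minute samples
shift = [85.0] * (4 * 60) + [95.0] * (4 * 60)
print("PEL dose:", dose_for_setup(shift, 60, 90.0))   # ~100%: only the 95 dB half counts
print("HC dose:", dose_for_setup(shift, 60, 80.0))    # ~125%: the 85 dB half adds 25%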
The personal computer also rose to prominence in the 1980s and 1990s, and noise dosimeter manufacturers took advantage of the fact that multiple results could now be stored in memory and downloaded to dedicated computer programs, producing exposure reports for the individual wearers of the instruments.
2000s
The physical size of the dosimeters, and the problems associated with damaged microphone cables, were both significantly reduced with the introduction of the “all-in-one” shoulder-mounted dosimeters in the early 2000s. Early designs of these smaller dosimeters omitted the display because of security and space constraints, but users still wanted instruments that could be read on site without first being downloaded to a personal computer, so some dosimeters gained small monochrome displays to show the key results on screen.
2010-18
Further enhancements to the design of shoulder-mounted dosimeters have led to a new generation of all-in-one devices featuring full-color OLED graphic displays and three or more independent virtual dosimeter setups that operate simultaneously. Falling memory costs have greatly increased storage capacity, allowing dosimeters to hold years of raw data even at one-second sampling rates.
Advances in digital signal processing have made it possible to record the audio content of sound above user-selectable levels for source identification during unattended surveys. They have also allowed regular octave-band spectra to be collected at the same time as the TWA and noise dose results, further aiding the prescription of suitable hearing protectors or noise control measures, all from the same measurement. There is no need to make multiple measurements; simply collect all the necessary data the first time.