Audio sample clocks are derived from oscillators that can be slightly off their nominal frequencies. For most applications these small deviations don't matter, but for some, such as precise musical instrument tuning, they are significant. Such applications usually require an initial calibration step in which a tone of known frequency is used to discover the phone's actual sample rate. Fortunately, while there may be significant variation from phone to phone, the sample rates within a single phone are quite stable, so once a calibration has been done and stored for a given phone, it remains valid for the life of the phone.
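For concreteness, here is one way such a calibration step might be sketched. This is my own illustration, not code from any phone SDK: record a reference tone of known frequency, measure its apparent frequency under the nominal clock, and scale. All names here (`estimate_actual_rate`, the 0.2% error figure) are hypothetical.

```python
import math

def estimate_actual_rate(samples, nominal_rate, ref_freq):
    """Estimate the hardware's true sample rate from a recording of a
    reference tone of known frequency (hypothetical calibration helper)."""
    # Locate negative-to-positive zero crossings, with sub-sample linear
    # interpolation, timestamped according to the nominal clock.
    times = []
    for n in range(len(samples) - 1):
        a, b = samples[n], samples[n + 1]
        if a < 0 <= b:
            times.append((n + (-a) / (b - a)) / nominal_rate)
    # Apparent tone frequency as seen through the nominal clock.
    measured_freq = (len(times) - 1) / (times[-1] - times[0])
    # A fast oscillator makes the tone appear slow, and vice versa.
    return nominal_rate * ref_freq / measured_freq

# Simulate a phone whose oscillator runs 0.2% fast: it really samples
# at 44188.2 Hz but labels the stream as 44100 Hz.
true_rate = 44100 * 1.002
samples = [math.sin(2 * math.pi * 1000.0 * n / true_rate)
           for n in range(44100)]
estimate = estimate_actual_rate(samples, 44100, 1000.0)
```

In practice the reference tone would come from a trusted external source (a tuning fork, a signal generator, or a PC sound card that has itself been calibrated), and a longer recording gives a proportionally finer estimate.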
My question concerns input sample rates versus output sample rates. You would expect these to be identical on a given phone (provided the sample rate is valid for both input and output), but in my experience with Windows Mobile devices that is not always the case: some devices have input and output sample rates that appear to be independent, so a separate calibration step is needed for audio input and for output. That is a major inconvenience for users of an application that depends on precise sample rates; it would be simpler to calibrate the input sample rate and then just assume the output sample rate is the same. I have not had the opportunity to measure precise sample rates on a wide variety of Nokia phones, hence my question: can anyone confirm or deny that Nokia phones have audio input and output sample rates that are locked together, so that if you know one rate precisely, you know the other too?