
What does Android's horizontalAccuracy for GPS really mean?

I cannot figure out what horizontalAccuracy is meant to represent on Android. The docs say the following:

We define horizontal accuracy as the radius of 68% confidence. In other words, if you draw a circle centered at this location's latitude and longitude, and with a radius equal to the accuracy, then there is a 68% probability that the true location is inside the circle.

It sounds like they are suggesting that the probability distribution is Gaussian in the radius (call it r from now on), such that 68% of the data is within 1 standard deviation of the mean. The problem is, this is impossible since:

  1. r can only ever be positive

  2. It should be the case that P(r=0)=0, since the area at r=0 is vanishingly small and therefore so is the probability of being there

More plausibly, the distribution is an isotropic 2-D Gaussian, that is to say a density proportional to exp( -(x^2 + y^2) / (2*sigma^2) ). But then the radius r follows a Rayleigh distribution, not a Gaussian.
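To check this claim numerically, here is a small Monte Carlo sketch (my own, not from any Android source): it samples an isotropic 2-D Gaussian with per-axis standard deviation sigma and looks at where the radii actually fall. The per-axis sigma circle contains about 39% of the fixes, not 68%; the 68% circle sits at roughly 1.51*sigma.

```python
import math
import random

# Sample an isotropic 2-D Gaussian with per-axis standard deviation sigma
# and look at the distribution of the radius r = sqrt(x^2 + y^2).
sigma = 10.0
n = 200_000
random.seed(0)
radii = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
         for _ in range(n)]

# For a Rayleigh(sigma) radius, P(r <= sigma) = 1 - exp(-1/2) ~ 0.393.
frac_within_sigma = sum(r <= sigma for r in radii) / n
print(f"P(r <= sigma)      ~ {frac_within_sigma:.3f} "
      f"(Rayleigh predicts {1 - math.exp(-0.5):.3f})")

# The radius that *does* contain 68% of samples is sigma * sqrt(-2 ln 0.32),
# i.e. about 1.51 * sigma.
r68 = sigma * math.sqrt(-2 * math.log(0.32))
frac_within_r68 = sum(r <= r68 for r in radii) / n
print(f"P(r <= 1.51*sigma) ~ {frac_within_r68:.3f}")
```

So if the reported accuracy really were the per-axis sigma, the docs' "68% probability" statement would be off by a large margin.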

Therefore, I see 4 different possibilities:

  1. The distribution is actually Gaussian in r, and the accuracy is a standard deviation which does contain 68% of the data. As far as I can tell, this is not just nonsensical but also impossible, for the reasons stated above.

  2. The accuracy is the radius within which 68% of the data is contained for the Rayleigh distribution, and is therefore not a standard deviation, since that distribution is not Gaussian. This would agree with what the Android docs say, but it seems like a rather arbitrary measure of accuracy, so it strikes me as unlikely.

  3. The accuracy is the standard deviation of the Rayleigh distribution and therefore doesn't contain 68% of the data. This could make sense, since standard deviation is a common measure of accuracy, but it disagrees with what the Android docs say, since it doesn't contain 68% of the data. It seems possible, but I'd be assuming that Google employees made quite a basic mistake which then went unnoticed for years, which also seems unlikely.

  4. The accuracy is the per-axis standard deviation of the 2-D Gaussian in x, y (i.e. sigma in the density proportional to exp( -(x^2 + y^2) / (2*sigma^2) )) and therefore does contain 68% of points along each of the x and y axes, but not in r. In this case, the Android devs would have incorrectly assumed that the 68% radius in r is the same as the per-axis standard deviation. Once again, this could make sense, but only if entire Google teams made a basic mathematical error that went unnoticed by the community for years. Again, I am reluctant to assume this.
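To see how far apart options 2-4 actually are, here is a small sketch (my own framing, not from Android's docs) that expresses each candidate radius in units of the per-axis sigma and computes what fraction of a Rayleigh-distributed radius each one contains:

```python
import math

# For an isotropic 2-D Gaussian with per-axis standard deviation sigma, the
# radius r is Rayleigh with CDF F(r) = 1 - exp(-r^2 / (2 sigma^2)).
def rayleigh_cdf(r, sigma):
    return 1.0 - math.exp(-(r * r) / (2.0 * sigma * sigma))

sigma = 1.0  # work in units of the per-axis standard deviation

candidates = {
    "option 2: 68% Rayleigh radius": sigma * math.sqrt(-2.0 * math.log(0.32)),
    "option 3: Rayleigh std dev":    sigma * math.sqrt(2.0 - math.pi / 2.0),
    "option 4: per-axis sigma":      sigma,
}

for name, radius in candidates.items():
    frac = rayleigh_cdf(radius, sigma)
    print(f"{name}: r = {radius:.3f} sigma, contains {frac:.1%} of fixes")
```

The three interpretations give radii of roughly 1.51, 0.66 and 1.00 sigma, containing about 68%, 19% and 39% of fixes respectively, so they are easily distinguishable with real data.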

I have spent a while looking through the docs and around the web, to little avail. I have seen every one of options 1-4 offered by different individuals, but I find all of them difficult to believe and can't see any reason to pick one over another.

Any thoughts on what might make sense? Any potential alternatives to the four options? Any resources I could look through? Any suggestions on talking to Google about this? I have heard from other programmers that reaching the dev teams can be quite difficult.

Having taken a few measurements, my suspicion is that the correct answer is option 2:

  2. The accuracy is the radius within which 68% of the data is contained for the Rayleigh distribution
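One way to check this suspicion is to log fixes near a point whose true position is known and count how many errors fall inside the reported accuracy circle; under option 2 that fraction should be close to 68%. Below is a minimal sketch of that test. The numbers in `fixes` are made-up placeholders, not real measurements; they would be replaced with (error, reported accuracy) pairs from actual logged `Location` objects.

```python
# Hypothetical logged fixes as (error_metres, reported_accuracy_metres) pairs,
# where error is the distance from a surveyed true position.
# These values are placeholders for illustration only.
fixes = [(3.1, 5.0), (4.8, 5.0), (7.2, 5.0), (2.0, 4.0), (6.5, 4.0),
         (1.9, 3.0), (3.4, 3.0), (2.7, 3.0), (5.9, 6.0), (4.1, 6.0)]

# If the reported accuracy really is the 68% radius, roughly 68% of errors
# should fall inside the reported accuracy circle (with many more samples
# than this, of course).
within = sum(err <= acc for err, acc in fixes) / len(fixes)
print(f"fraction of fixes within reported accuracy: {within:.0%}")
```

A fraction near 39% instead would point to option 4, and one near 19% to option 3, per the Rayleigh CDF values above a few hundred fixes should separate these cleanly.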
