
Cyclic Redundancy Check: single and double-bit errors

I found this in Forouzan's *Data Communications and Networking* (5th edition), but I'm not able to understand the logic behind it.

This is in the context of the topic "two isolated single-bit errors":

> In other words, g(x) must not divide x^t + 1, where t is between 0 and n − 1. However, t = 0 is meaningless and t = 1 is needed, as we will see later. This means t should be between 2 and n − 1.

Why is t = 1 excluded here? e(x) = x^1 + 1 represents two adjacent bit errors, which must also be detected by our g(x), right?
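
To make the divisibility condition concrete, here is a minimal Python sketch (my own illustration, not from the book) that represents GF(2) polynomials as integer bitmasks and searches for the smallest t such that g(x) divides x^t + 1. The generator g(x) = x^3 + x + 1 is an arbitrary example choice:

```python
def gf2_mod(dividend: int, divisor: int) -> int:
    """Remainder of polynomial division over GF(2); bit i of an int is the
    coefficient of x^i, so 0b1011 stands for x^3 + x + 1."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1011  # g(x) = x^3 + x + 1, an arbitrary primitive example generator

# Two isolated single-bit errors x^j + x^i = x^i * (x^t + 1) with t = j - i
# go undetected exactly when g(x) divides x^t + 1 (the factor x^i is coprime
# to g(x) whenever g(x) has a nonzero constant term). Find the smallest t:
t = 1
while gf2_mod((1 << t) | 1, g) != 0:
    t += 1
print(t)  # 7: any double-bit error spanning fewer than 7 positions is caught
```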

Book screenshots: [two isolated single-bit errors][1], [odd-bit errors][2], [summary][3]

[1]: https://i.stack.imgur.com/INo1x.png
[2]: https://i.stack.imgur.com/CzO6r.png
[3]: https://i.stack.imgur.com/6MfRm.png

The third image states that (x + 1) should be a factor of g(x). For a generator of a fixed degree, this roughly halves the maximum block length over which the CRC is guaranteed to detect 2-bit errors (from n − 1 down to about (n/2) − 1), but in exchange it guarantees detection of every error affecting an odd number of bits, such as a 3-bit error (x^k + x^j + x^i), regardless of the bit positions: an odd-weight e(x) has e(1) = 1, so (x + 1) cannot divide it.
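
As a sanity check, here is a short comparison (again my own sketch, reusing the gf2_mod helper from the first snippet; the two degree-4 generators are arbitrary example choices):

```python
from itertools import combinations

g_prim = 0b10011  # x^4 + x + 1, primitive
g_odd  = 0b11101  # (x + 1)(x^3 + x + 1) = x^4 + x^3 + x^2 + 1

def two_bit_span(g: int) -> int:
    """Smallest t with g(x) | x^t + 1, i.e. the first double-bit error
    spacing that would go undetected."""
    t = 1
    while gf2_mod((1 << t) | 1, g) != 0:
        t += 1
    return t

print(two_bit_span(g_prim), two_bit_span(g_odd))  # 15 7: span roughly halved

# The (x + 1) factor catches every odd-weight error at any length, since an
# odd-weight e(x) has e(1) = 1 and so cannot be a multiple of (x + 1):
misses = [c for c in combinations(range(64), 3)
          if gf2_mod(sum(1 << i for i in c), g_odd) == 0]
print(misses)  # []: no 3-bit error within 64 positions escapes detection
```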

Although the book recommends using (x + 1) as a factor of g(x), that does not mean a good g(x) may divide e(x) = x + 1: if it did, an error in two adjacent bits would go undetected, which would be a failure. Excluding t = 1 from the condition therefore appears to be a mistake in the book.
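
To see that failure concretely (same bitmask helper as above), take the degenerate case where g(x) itself is x + 1:

```python
# If g(x) itself were x + 1, the adjacent double-bit error e(x) = x + 1
# and every shifted copy of it would be a multiple of g(x):
g = 0b11       # g(x) = x + 1
e = 0b11       # e(x) = x + 1, i.e. bits i and i+1 both flipped
print(gf2_mod(e, g))       # 0 -> the error is undetected
print(gf2_mod(e << 5, g))  # 0 -> x^5 * (x + 1) is undetected as well
```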
