
Simulate packet loss in UDP in Python

I am supposed to simulate a packet loss rate of 10^-2 (i.e. 0.01) in a stop-and-wait protocol, meaning that 1 out of every 100 transmitted packets gets lost. Suppose I'm sending 1000 packets; how do I drop exactly 1 random packet out of every 100 packets sent throughout the transmission?

Having a rate of 0.01 doesn't mean that exactly 1 out of every 100 packets gets dropped. It means that each packet has a 1% chance of getting lost. Under the assumption that losses are independent of each other, the actual number of lost packets will follow a binomial distribution.
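To illustrate, here is a quick simulation (a sketch, not part of the original answer) showing that the number of lost packets varies around N * p from run to run rather than landing on exactly N * p:

```python
import random

def count_losses(n_packets: int, p: float) -> int:
    """Count lost packets when each packet is lost independently with probability p."""
    return sum(1 for _ in range(n_packets) if random.random() < p)

# Repeat the experiment many times: each run of 1000 packets with p = 0.01
# loses a Binomial(1000, 0.01)-distributed number of packets.
random.seed(42)
counts = [count_losses(1000, 0.01) for _ in range(500)]
mean_losses = sum(counts) / len(counts)
print(mean_losses)  # close to the expected value 1000 * 0.01 = 10
```

Individual runs may lose 5 packets or 15; only the average over many runs settles near N * p.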

For each packet you generate, check whether a random Uniform(0,1) draw is less than or equal to the loss probability p, in your case 0.01. If it is, that packet is lost; otherwise it goes through. This approach scales if you increase or decrease N, the total number of packets. The expected number of losses will be N * p, but if you repeat the experiment multiple times there will be variability.
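A minimal sketch of this per-packet check, written as a hypothetical `lossy_sendto` wrapper (the function name and loss-rate constant are illustrative, not from the original post):

```python
import random

LOSS_RATE = 0.01  # p: each packet is lost independently with probability 0.01

def lossy_sendto(sock, payload: bytes, addr, p: float = LOSS_RATE) -> bool:
    """Simulated lossy channel: with probability p the packet is silently
    dropped instead of being handed to sock.sendto(). Returns True if the
    packet was actually transmitted."""
    if random.random() < p:
        return False          # packet "lost" in transit: never sent
    sock.sendto(payload, addr)
    return True
```

In a stop-and-wait sender you would call `lossy_sendto` wherever you currently call `sock.sendto`; a dropped packet never reaches the receiver, so the missing ACK triggers the sender's usual retransmission timeout.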

