I am looking for a way to draw a random float from a probability distribution whose mean (loc) is the value of a node in a model. On top of this, the numbers drawn should stay within the range [-1.0, 1.0] and should have no more than 1 decimal place.
So if the value of the node is, let's say, 0.8, then loc should be 0.8, but values outside [-1, 1] can't be drawn. I'm really new to programming, so if anyone can give me any tips on whether this is possible to begin with, it would be much appreciated. With np.random.normal(loc, scale, size) alone it's not possible, I think. Thanks in advance.
Thanks so much for your answer. My bad, I meant 2 decimals! Which makes the solution different from your suggestion, I guess, because then the number of possible values increases quite a lot and maybe the approach is infeasible?
Right now I have this:
def randomnumber(loc, scale):
    return np.random.normal(loc, scale, size=None)

elif node == 'is_po':
    for neig in graph.predecessors(node):
        neig_w = graph.edges[neig, node]['weight']
        neig_s = graph.node[neig]['status'][t - delta_t]
        loc = neig_s
        scale = 1
        c = randomnumber(loc, scale)
    graph.node[node]['status'][t] = c * delta_t
This seems to give me a random number drawn around the value of neig_s, but I don't know whether it is possible to make sure that the numbers drawn don't go above 1 or below -1.
I am assuming that you want a normally distributed number with scale=1. A simple solution is as follows:
import numpy as np

def func(loc):
    max_iter = 1000
    # rejection sampling: redraw until the value falls inside (-1, 1)
    for _ in range(max_iter):
        x = np.random.normal(loc)
        if -1 < x < 1:
            return x
    print('max iter exceeded')
Be aware that if you pass a loc far outside the interval, the loop will almost never find an acceptable value, which is why I set the 'max_iter' limit.
If you want a more advanced solution, you can define a truncated normal distribution (scipy provides this as scipy.stats.truncnorm).
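For reference, a minimal sketch of the truncated-normal approach with scipy.stats.truncnorm; the helper name and the rounding to one decimal place are my own additions, not part of the original answer:

```python
import numpy as np
from scipy.stats import truncnorm

def bounded_normal(loc, scale=1.0, low=-1.0, high=1.0):
    # truncnorm expects the bounds expressed in standard deviations
    # relative to loc, so convert the absolute bounds first
    a = (low - loc) / scale
    b = (high - loc) / scale
    # round to one decimal place, as the question asks
    return round(truncnorm.rvs(a, b, loc=loc, scale=scale), 1)

x = bounded_normal(0.8)
print(-1.0 <= x <= 1.0)  # always True
```

Unlike rejection sampling, this never needs an iteration cap, because every draw already lies inside the interval.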
Given that you want one decimal place for your results, what you're describing isn't a continuous distribution on the range [-1, 1] but rather a discrete distribution scaled to that range. There are 21 allowable values (-1.0, -0.9, -0.8, ..., 0.8, 0.9, 1.0), so one approach is to use a discrete distribution which yields results on the range [0, ..., 20]. You would then scale and translate your target mean to its corresponding scaled_target value between 0 and 20, generate a value from some distribution with that mean, and scale the result back to the range [-1, ..., 1].
Forward scaling is accomplished via the relationship scaled_target = 20 * (target + 1) / 2. For instance, target = 0.8 would yield scaled_target = 18, so you would generate values between 0 and 20 with a mean of 18. You then scale back into the range [-1, ..., 1] by subtracting 10 from the outcome and dividing by 10.
One easy-to-use distribution would be the binomial with n = 20 to yield the desired range. Since the mean of a binomial is n * p and you want a mean of 18, you would use p = 0.9, which can be derived directly as (target_mean + 1.0) / 2.0; no need to multiply by 20 and then divide by 20.
The forward and reverse scaling take only a couple of lines of code, and you can use numpy (or scipy if you prefer) to generate the binomial distribution:
import numpy

def generate_value(target_mean):
    scaled_target = (target_mean + 1.0) / 2.0
    return (numpy.random.binomial(n=20, p=scaled_target) - 10.0) / 10.0
Sample output:
print([generate_value(target_mean = 0.8) for _ in range(10)]) # => [0.8, 0.9, 0.5, 0.7, 0.7, 0.9, 0.5, 0.8, 1.0, 0.8]
print([generate_value(target_mean = 0.0) for _ in range(10)]) # => [-0.1, 0.1, 0.2, 0.0, -0.3, -0.4, -0.2, -0.1, -0.2, 0.3]
If you want a broader range of outcomes, you'll need to pick a different discrete distribution, but the approach generalizes pretty straightforwardly.
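To illustrate that generalization, the same scale-and-translate trick extends to arbitrary bounds and any number of decimal places by adjusting the binomial's n; the helper name and its parameters here are my own, not from the original answer:

```python
import numpy as np

def generate_bounded(target_mean, low=-1.0, high=1.0, decimals=1):
    # number of discrete steps across the interval at this precision
    n = int(round((high - low) * 10 ** decimals))
    # map target_mean from [low, high] onto a success probability in [0, 1]
    p = (target_mean - low) / (high - low)
    # draw on {0, ..., n}, then map back onto [low, high]
    return low + np.random.binomial(n=n, p=p) / 10 ** decimals

x = generate_bounded(0.8)                      # one decimal on [-1, 1]
y = generate_bounded(0.25, 0, 1, decimals=2)   # two decimals on [0, 1]
```

With the question's defaults (low=-1, high=1, decimals=1) this reduces to exactly the n = 20, p = 0.9 case worked through above.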
ADDENDUM
Revising this to two decimal places does not change the structure of the approach, only the scaling:
def generate_value(target_mean):
    scaled_target = (target_mean + 1.0) / 2.0
    return (numpy.random.binomial(n=200, p=scaled_target) - 100.0) / 100.0
If the values generated with a simple binomial are too clustered for you, you can replace it with a beta-binomial distribution by dynamically generating the binomial's p using a beta distribution, with α scaled to yield outcomes with an expected value of scaled_target and β scaled to yield an appropriate dispersion of the outcomes:
import numpy

BETA_SHAPE = 2.0  # larger values will yield more clustered outcomes

def generate_value(target_mean):
    scaled_target = (target_mean + 1.0) / 2.0
    alpha = BETA_SHAPE * scaled_target / (1.0 - scaled_target)
    beta_p = numpy.random.beta(a=alpha, b=BETA_SHAPE)
    return (numpy.random.binomial(n=200, p=beta_p) - 100.0) / 100.0
Sample output:
lst = [generate_value(target_mean = 0.7) for _ in range(10000)]
print(numpy.mean(lst)) # => 0.695812
print(min(lst)) # => -0.37
print(max(lst)) # => 1.0
The specific choice of distribution is up to you, but the approach is a general one. This also gives the precise mean you're interested in, while answers based on truncating or acceptance/rejection will shift the mean.
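That claim about the shifted mean is easy to check empirically; here is a small comparison sketch (the seed and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
target = 0.8

# acceptance/rejection: keep only normal draws inside (-1, 1)
draws = rng.normal(loc=target, size=100_000)
accepted = draws[(draws > -1) & (draws < 1)]

# binomial approach with the forward/back scaling described above
binom = (rng.binomial(n=20, p=(target + 1) / 2, size=100_000) - 10) / 10

print(accepted.mean())  # noticeably below 0.8: truncation cuts off more
                        # mass above 1 than below -1, pulling the mean down
print(binom.mean())     # very close to 0.8
```

The asymmetric truncation is what biases the rejection-based estimate; the binomial construction has mean n * p by design, so it lands on the target exactly in expectation.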