Randomly sample from multiple tf.data.Datasets in TensorFlow
Suppose I have N tf.data.Datasets and a list of N probabilities (summing to 1). I would like to create a dataset such that examples are sampled from the N datasets with the given probabilities.

I would like this to work for arbitrary probabilities, so a simple zip/concat/flatmap with a fixed number of examples from each dataset is probably not what I am looking for.

Is it possible to do this in TF? Thanks!
As of 1.12, tf.data.experimental.sample_from_datasets provides this functionality: https://www.tensorflow.org/api_docs/python/tf/data/experimental/sample_from_datasets

EDIT: In earlier versions, the same functionality is available as tf.contrib.data.sample_from_datasets.
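For intuition, the behaviour of sample_from_datasets can be sketched in plain Python: at each step, pick one of the source iterators according to the given weights and emit its next element. The helper name below is illustrative, not part of the TF API:

```python
import itertools
import random

# Plain-Python sketch of the idea behind sample_from_datasets: at each step,
# choose one source iterator according to the given weights and yield its
# next element. Not the TF API; just the sampling logic.
def sample_from_iterators(iterators, weights, n, rng=None):
    rng = rng or random.Random(0)
    return [next(rng.choices(iterators, weights=weights, k=1)[0])
            for _ in range(n)]

a = itertools.repeat('a')
b = itertools.repeat('b')
print(sample_from_iterators([a, b], [0.25, 0.75], 8))
```

With weight 0.75 on the second iterator, roughly three out of four emitted elements will be 'b'.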
If p is a Tensor of probabilities (or unnormalized relative probabilities) where p[i] is the probability that dataset i is chosen, you can use tf.multinomial in conjunction with tf.contrib.data.choose_from_datasets:
# create some datasets and their unnormalized probability of being chosen
datasets = [
    tf.data.Dataset.from_tensors(['a']).repeat(),
    tf.data.Dataset.from_tensors(['b']).repeat(),
    tf.data.Dataset.from_tensors(['c']).repeat(),
    tf.data.Dataset.from_tensors(['d']).repeat()]
p = [1., 2., 3., 4.]  # unnormalized

# random choice function
def get_random_choice(p):
    choice = tf.multinomial(tf.log([p]), 1)
    return tf.cast(tf.squeeze(choice), tf.int64)

# assemble the "choosing" dataset
choice_dataset = tf.data.Dataset.from_tensors([0])  # create a dummy dataset
choice_dataset = choice_dataset.map(lambda x: get_random_choice(p))  # populate it with random choices
choice_dataset = choice_dataset.repeat()  # repeat

# obtain your combined dataset, assembled randomly from the source datasets
# with the desired selection frequencies
combined_dataset = tf.contrib.data.choose_from_datasets(datasets, choice_dataset)
Note that the dataset needs to be initialized (you can't use a simple make_one_shot_iterator):
choice_iterator = combined_dataset.make_initializable_iterator()
choice = choice_iterator.get_next()

with tf.Session() as sess:
    sess.run(choice_iterator.initializer)
    print(b''.join([sess.run(choice)[0] for _ in range(20)]).decode())

>> ddbcccdcccbbddadcadb
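As a quick sanity check of the expected selection frequencies (plain Python, independent of TF), drawing many weighted choices with the unnormalized weights 1:2:3:4 should select the four datasets roughly 10%/20%/30%/40% of the time:

```python
import random
from collections import Counter

# Empirically verify that unnormalized weights [1, 2, 3, 4] give selection
# frequencies close to 0.1, 0.2, 0.3, 0.4.
p = [1., 2., 3., 4.]
rng = random.Random(42)
draws = rng.choices('abcd', weights=p, k=100_000)
counts = Counter(draws)
for letter, weight in zip('abcd', p):
    print(letter, round(counts[letter] / len(draws), 3), 'expected', weight / sum(p))
```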
I think you can use tf.contrib.data.rejection_resample to achieve the target distribution.
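The idea behind rejection resampling can be sketched in plain Python: accept each incoming example with probability proportional to target[c] / initial[c] for its class c, so the accepted stream follows the target class distribution. The function below is an illustrative sketch of that idea, not the tf.contrib.data.rejection_resample API:

```python
import random

def rejection_resample(stream, class_fn, initial, target, rng=None):
    """Yield examples from stream so the class distribution matches target.

    initial/target map each class to its probability; class_fn extracts the
    class of an example. Illustrative sketch, not the TF API.
    """
    rng = rng or random.Random(0)
    ratio = {c: target[c] / initial[c] for c in target}
    scale = max(ratio.values())
    accept = {c: r / scale for c, r in ratio.items()}  # max acceptance prob is 1
    for x in stream:
        if rng.random() < accept[class_fn(x)]:
            yield x
```

For example, feeding a stream that is 90% class 'a' and 10% class 'b' with target {'a': 0.5, 'b': 0.5} accepts roughly one in nine 'a' examples, so the output is close to 50/50 at the cost of discarding data.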