
non-prime factorings with some repeats

Let's say we have the factors of a number, for example 1260:

>>> factors(1260)
[2, 2, 3, 3, 5, 7]

What would be the best way in Python to generate combinations with every possible subproduct from these numbers, i.e. all factorings, not only the prime factoring, with the sum of factors less than max_product?

If I do combinations from the prime factors, I have to re-factor the remaining part of the product, as I do not know the remaining part that is not in the combination.

I can also refine my divisors function to produce pairs of divisors instead of divisors in size order, but it would still cost me to do this for numbers with products up to 12000. The product must always remain the same.

I was linked to a divisor routine, but it did not look worth the effort to adapt it to my other code. At least my divisor function is noticeably faster than sympy's:

def divides(number):
    """ yield all divisors of number in increasing order """
    if number<2:
        yield number
        return
    high = [number]
    sqr = int(number ** 0.5)
    limit = sqr+(sqr*sqr != number)
    yield 1
    for divisor in xrange(3, limit, 2) if (number & 1) else xrange(2, limit):
        if not number % divisor:
            yield divisor
            high.append(number//divisor)
    if sqr*sqr== number: yield sqr
    for divisor in reversed(high):
        yield divisor
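For reference, the divisors come out in increasing order: the small divisors are yielded on the way up, and their cofactors, collected in high, come out in reverse at the end. A quick sanity check under Python 2 (the function uses xrange):

>>> list(divides(28))
[1, 2, 4, 7, 14, 28]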

The only problem with reusing this code is linking the divisors to a factoring sieve, or doing some kind of itertools.product of the divisors of the divisors in pairs, which I would give out as pairs instead of sorting into order.

Example results would be:

[4, 3, 3, 5, 7] (one replacement of two)
[5, 7, 36] (one replacement of three)
[3, 6, 14, 5] (two replacements)

Probably I would need some way to produce a sieve or a dynamic-programming solution for smaller divisors which could be linked to the numbers whose divisors they are. It looks difficult to avoid overlap, though. I do have one sieve function ready which stores the biggest prime factor for each number to speed up factoring without saving the complete factorization of every number... maybe it could be adapted; a sketch of what such a sieve might look like is below.
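For illustration only, a minimal sketch of what such a sieve could look like (hypothetical helper names, not the actual function mentioned above):

def largest_factor_sieve(limit):
    """ sieve[n] holds the largest prime factor of n; sieve[n] == n means n is prime """
    sieve = list(range(limit + 1))
    for p in range(2, limit + 1):
        if sieve[p] == p:                       # p is prime
            for multiple in range(2 * p, limit + 1, p):
                sieve[multiple] = p             # later (larger) primes overwrite earlier ones
    return sieve

def factor_with_sieve(n, sieve):
    """ prime factorization by repeated division with the stored prime factor """
    result = []
    while n > 1:
        result.append(sieve[n])
        n //= sieve[n]
    return sorted(result)

With this, factor_with_sieve(1260, largest_factor_sieve(12000)) gives [2, 2, 3, 3, 5, 7].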

UPDATE: The sum of factors should be near the product, so there is probably a high number of factors <= 10 in the answer (up to 14 factors).

UPDATE2: Here is my code, but I must figure out how to do multiple removals recursively or iteratively for parts more than 2 long, and dig up the lexicographic partitioning to replace the jumping bit patterns which produce duplicates (a pathetic hit count even for one replacement, and that does not count the passing of 'single element partitionings' inside single_partition):

from __future__ import print_function
import itertools
import operator
from euler import factors

def subset(seq, mask):
    """ binary mask of len(seq) bits, return generator for the sequence """
    # this is not lexical order, replace with lexical order masked passing duplicates
    return (c for ind,c in enumerate(seq) if mask & (1<<ind))


def single_partition(seq, n = 0, func = lambda x: x):
    ''' map given function to one partition  '''
    for n in range(n, (2**len(seq))):
        result = tuple(subset(seq,n))
        others = tuple(subset(seq,~n))
        if len(result) < 2 or len(others) == 0:
            #empty subset or only one or all
            continue
        result = (func(result),)+others
        yield result


if __name__=='__main__':
    seen,  hits, count = set(), 0, 0
    for f in single_partition(factors(13824), func = lambda x: reduce(operator.mul, x)):
        if f not in seen:
            print(f,end=' ')
            seen.add(f)
        else:
            hits += 1
        count += 1
    print('\nGenerated %i, hits %i' %(count,hits))
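A quick way to see where the duplicates come from is to run the generator on a small factor list with a repeated prime (Python 2, where reduce is a builtin):

>>> list(single_partition([2, 2, 3], func=lambda x: reduce(operator.mul, x)))
[(4, 3), (6, 2), (6, 2)]

The masks 0b101 and 0b110 select the same multiset {2, 3} because the two 2s are indistinguishable, so (6, 2) is produced twice; lexicographic partitioning would generate it only once.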

REFINEMENT: I am happy to get only the factorings with at most 5 factors in the non-prime-factor parts. I have found by hand that the non-decreasing arrangements for up to 5 identical factors follow this form:

partitions of 5      applied to 2**5
1  1  1  1  1        2  2  2  2  2
1  1  1  2           2  2  2  4
1  1  3              2  2  8
1  2  2              2  4  4
1  4                 2  16
2  3                 4  8
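A throwaway check of this table (a simple recursive partitions helper, not part of the program above) could be:

from __future__ import print_function

def partitions(n, smallest=1):
    """ non-decreasing integer partitions of n with parts >= smallest """
    yield (n,)
    for first in range(smallest, n // 2 + 1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for p in partitions(5):
    if len(p) > 1:                      # skip the trivial one-part partition
        print(p, '->', tuple(2 ** e for e in p))

which prints each row above, e.g. (1, 1, 3) -> (2, 2, 8).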

THE SOLUTION: I am not removing the accepted answer, which is a fine solution, but it is over-complicated for the job. From Project Euler I reveal only this helper function from orbifold of NZ; it works faster and does not need the prime factors first:

def factorings(n,k=2):
    # all factorings of n into factors >= k, in non-decreasing order; the trivial [n] is included
    result = []
    while k*k <= n:
        if n%k == 0:
            result.extend([[k]+f for f in factorings(n/k,k)])
        k += 1
    return result + [[n]]
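For example, on a small input (run under Python 2, where n/k stays an integer):

>>> factorings(12)
[[2, 2, 3], [2, 6], [3, 4], [12]]

i.e. every multiplicative partition of 12 with factors in non-decreasing order, the trivial [12] included.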

His relevant solution for problem 88 ran in Python 2.7 in 4.85 s by my timing decorator; after optimizing the stop condition with a found counter, it took 3.4 s in 2.6.6 with psyco and 3.7 s in 2.7 without psyco. The speed of my own code went from 30 seconds with the code in the accepted answer (with the sorting I had added removed) to 2.25 s (2.7 without psyco) and 782 ms with psyco in Python 2.6.6.

What you are looking for is more commonly called a divisor. The answers to this question may help you.

from __future__ import print_function
import itertools
import operator

def partition(iterable, chain=itertools.chain, map=map):
    # http://code.activestate.com/recipes/576795/
    # In [1]: list(partition('abcd'))
    # Out[1]: 
    # [['abcd'],
    #  ['a', 'bcd'],
    #  ['ab', 'cd'],
    #  ['abc', 'd'],
    #  ['a', 'b', 'cd'],
    #  ['a', 'bc', 'd'],
    #  ['ab', 'c', 'd'],
    #  ['a', 'b', 'c', 'd']]    
    s = iterable if hasattr(iterable, '__getslice__') else tuple(iterable)
    n = len(s)
    first, middle, last = [0], range(1, n), [n]
    getslice = s.__getslice__
    return [map(getslice, chain(first, div), chain(div, last))
            for i in range(n) for div in itertools.combinations(middle, i)]

def product(factors,mul=operator.mul):
    return reduce(mul,factors,1)

def factorings(factors,product=product,
               permutations=itertools.permutations,
               imap=itertools.imap,
               chain_from_iterable=itertools.chain.from_iterable,
               ):
    seen=set()
    seen.add(tuple([product(factors)]))
    for grouping in chain_from_iterable(
        imap(
            partition,
            set(permutations(factors,len(factors)))
            )):
        result=tuple(sorted(product(group) for group in grouping))
        if result in seen:
            continue
        else:
            seen.add(result)
            yield result

if __name__=='__main__':
    for f in factorings([2,2,3,3,5,7]):
        print(f,end=' ')

yields

(3, 420) (9, 140) (28, 45) (14, 90) (2, 630) (3, 3, 140) (3, 15, 28) (3, 14, 30) (2, 3, 210) (5, 9, 28) (9, 10, 14) (2, 9, 70) (2, 14, 45) (2, 7, 90) (3, 3, 5, 28) (3, 3, 10, 14) (2, 3, 3, 70) (2, 3, 14, 15) (2, 3, 7, 30) (2, 5, 9, 14) (2, 7, 9, 10) (2, 2, 7, 45) (2, 3, 3, 5, 14) (2, 3, 3, 7, 10) (2, 2, 3, 7, 15) (2, 2, 5, 7, 9) (2, 2, 3, 3, 5, 7) (5, 252) (10, 126) (18, 70) (6, 210) (2, 5, 126) (5, 14, 18) (5, 6, 42) (7, 10, 18) (6, 10, 21) (2, 10, 63) (3, 6, 70) (2, 5, 7, 18) (2, 5, 6, 21) (2, 2, 5, 63) (3, 5, 6, 14) (2, 3, 5, 42) (3, 6, 7, 10) (2, 3, 10, 21) (2, 3, 5, 6, 7) (2, 2, 3, 5, 21) (4, 315) (20, 63) (2, 2, 315) (4, 5, 63) (4, 9, 35) (3, 4, 105) (7, 9, 20) (3, 20, 21) (2, 2, 9, 35) (2, 2, 3, 105) (4, 5, 7, 9) (3, 4, 5, 21) (3, 3, 4, 35) (3, 3, 7, 20) (2, 2, 3, 3, 35) (3, 3, 4, 5, 7) (7, 180) (3, 7, 60) (2, 18, 35) (2, 6, 105) (3, 10, 42) (2, 3, 6, 35) (15, 84) (12, 105) (3, 5, 84) (5, 12, 21) (7, 12, 15) (4, 15, 21) (2, 15, 42) (3, 5, 7, 12) (3, 4, 7, 15) (2, 6, 7, 15) (2, 2, 15, 21) (21, 60) (30, 42) (6, 7, 30) (5, 7, 36) (2, 21, 30) (5, 6, 6, 7) (3, 12, 35) (6, 14, 15) (4, 7, 45) (35, 36) (6, 6, 35)

I use a list like [(2, 9), (3, 3)] (for [2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3]) as the base list of unexpanded factors and a list of expanded factors. In each round I pick one factor from the base, decreasing its count, and

  • add it to the expanded list, increasing its length, so we have one additional factor in total (until the cutoff)
  • multiply it with all expanded factors, generating all possibilities

With dynamic programming and the cutoff strategy this became extremely fast:

from itertools import groupby

def multiplicies( factors ):
    """ [x,x,x,,y,y] -> [(x,3), (y,2)] """
    return ((k, sum(1 for _ in group)) for k, group in groupby(factors))

def combinate(facs, cutoff=None):
    facs = tuple(multiplicies(facs))

    results = set()
    def explode(base, expanded):
        # `k` is the key for the caching
        # if the function got called like this before return instantly
        k = (base, expanded)
        if k in results:
            return
        results.add(k)

        # pick a factor
        for (f,m) in base:
            # remove it from the bases
            newb = ((g, (n if g!=f else n-1)) for g,n in base)
            newb = tuple((g,x) for g,x in newb if x > 0)

            # do we cutoff yet?
            if cutoff is None or len(newb) + len(expanded) < cutoff:
                explode(newb, tuple(sorted(expanded + (f,))))

            # multiply the pick with each factor in expanded
            for pos in range(len(expanded)):
                newexp = list(expanded)
                newexp[pos] *= f
                explode(newb, tuple(sorted(newexp)))

    explode(facs, ())
    # turn the `k` (see above) into real factor lists
    return set((tuple((x**y) for x,y in bases) + expanded) 
        for (bases, expanded) in results)

# you don't even need the cutoff here!
combinate([2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3])
# but you need it for 
combinate([2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3,5,7,9,11], 5)

Try Psyco if you can (I can't because I only have Py2.7 here); it might speed this up quite a bit too.
