
Joining strings. Generator or list comprehension?

Consider the problem of extracting the alphabetic characters from a huge string.

One way to do it is:

''.join([c for c in hugestring if c.isalpha()])

The mechanism is clear: the list comprehension builds a list of characters, and the join method knows how many characters it needs to join by reading the length of the list.
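To make that concrete, here is a tiny example (using a short string in place of hugestring): a list is a sized sequence, so its length is available before join does any work.

```python
# A list is a sized sequence: its length is known before join starts.
chars = [c for c in "a1b2c3" if c.isalpha()]
print(len(chars))       # the list reports how many characters it holds
print(''.join(chars))   # 'abc'
```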

The other way is:

''.join(c for c in hugestring if c.isalpha())

Here the generator expression produces a generator. The join method does not know in advance how many characters it is going to join, because a generator has no length (calling len() on it fails). So this way of joining should, in theory, be slower than the list comprehension method.

But testing in Python shows that it is not slower. Why is this so? Can anyone explain how join works on a generator?

To be clear:

sum(j for j in range(100))

doesn't need any knowledge of 100, because it can keep a running total: it pulls the next element from the generator with next() and adds it to the cumulative sum. However, since strings are immutable, joining strings cumulatively would create a new string on each iteration, so it would take a lot of time.
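The sum case can be spelled out explicitly; the running total consumes the generator one element at a time and never needs to know how many items are coming:

```python
# A running total needs no advance knowledge of the number of items.
gen = (j for j in range(100))
total = 0
while True:
    try:
        total += next(gen)   # pull the next value and fold it in
    except StopIteration:    # generator exhausted
        break
print(total)  # 4950, same as sum(range(100))
```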

When you call str.join(gen) where gen is a generator, Python does the equivalent of list(gen) before going on to examine the length of the resulting sequence.

Specifically, if you look at the code implementing str.join in CPython, you'll see this call:

    fseq = PySequence_Fast(seq, "can only join an iterable");

The call to PySequence_Fast converts the seq argument into a list if it wasn't a list or tuple already.
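In pure-Python terms, the effect is roughly the following sketch (join_like is a hypothetical helper for illustration, not CPython's actual code, which works in C on the materialized sequence):

```python
# Rough pure-Python picture of what happens at the top of str.join (a sketch).
def join_like(sep, iterable):
    # PySequence_Fast: leave lists/tuples alone, materialize anything else
    seq = iterable if isinstance(iterable, (list, tuple)) else list(iterable)
    # from here on len(seq) is available, exactly as in the list case
    return sep.join(seq)

print(join_like('', (c for c in "a1b2c3!" if c.isalpha())))  # 'abc'
```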

So, the two versions of your call are handled almost identically. In the list comprehension, you're building the list yourself and passing it into join. In the generator expression version, the generator object you pass in gets turned into a list right at the start of join, and the rest of the code operates the same for both versions.

join() does not need to be implemented as a sequential appending of elements to a longer and longer accumulated string (which would indeed be very slow for long sequences); it just needs to produce the same result. So join() is probably appending characters to some internal memory buffer and creating a string from it at the end. The list comprehension construct, on the other hand, needs to construct the list first (by iterating over hugestring), and only then let join() begin its work.
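The contrast between the two strategies can be sketched in a few lines (join_by_concat and join_by_buffer are illustrative names, not real APIs; io.StringIO stands in for the internal buffer the answer speculates about):

```python
import io

def join_by_concat(parts):
    # naive strategy: each += builds a brand-new string,
    # so total work grows quadratically with the number of parts
    out = ''
    for p in parts:
        out = out + p
    return out

def join_by_buffer(parts):
    # buffer strategy: write pieces into a growable buffer,
    # materialize the final string only once at the end
    buf = io.StringIO()
    for p in parts:
        buf.write(p)
    return buf.getvalue()

parts = [c for c in "a1b2c3" if c.isalpha()]
print(join_by_concat(parts), join_by_buffer(parts))  # abc abc
```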

Also, I doubt that join() looks at the list's length, since it can't know that each element is a single character (in most cases, it won't be); it probably just obtains an iterator from the list.

At least on my machine, the list comprehension is faster for the case I tested, likely because ''.join can optimize its memory allocation when the length is known. It probably depends on the specific example you're testing (e.g., if the condition you're testing matches less often, the price CPython pays for not knowing the length ahead of time may be smaller):

In [17]: import string; import numpy as np

In [18]: s = ''.join(np.random.choice(list(string.printable), 1000000))

In [19]: %timeit ''.join(c for c in s if c.isalpha())
10 loops, best of 3: 69.1 ms per loop

In [20]: %timeit ''.join([c for c in s if c.isalpha()])
10 loops, best of 3: 61.8 ms per loop
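If you want to reproduce this outside IPython, a stdlib-only version of the same benchmark (random.choices replacing np.random.choice, timeit replacing %timeit) looks like this; the actual timings are machine-dependent, so only the equality of the two results is guaranteed:

```python
import random
import string
import timeit

# a large string of mixed printable characters, as in the %timeit session
s = ''.join(random.choices(string.printable, k=100_000))

t_gen  = timeit.timeit(lambda: ''.join(c for c in s if c.isalpha()), number=20)
t_list = timeit.timeit(lambda: ''.join([c for c in s if c.isalpha()]), number=20)

# both spellings must produce identical output; only the timings differ
print(f"genexp: {t_gen:.3f} s   listcomp: {t_list:.3f} s")
```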
