Why is this Julia snippet so much slower than the Python equivalent? (with dictionaries)
I have the following code in Python Jupyter:
n = 10**7
d = {}
%timeit for i in range(n): d[i] = i
%timeit for i in range(n): _ = d[i]
%timeit d[10]
with the following times:
763 ms ± 19.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
692 ms ± 3.74 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
39.5 ns ± 0.186 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
and this on Julia:
using BenchmarkTools
d = Dict{Int64, Int64}()
n = 10^7
r = 1:n
@btime begin
    for i in r
        d[i] = i
    end
end
@btime begin
    for i in r
        _ = d[i]
    end
end
@btime d[10]
with times:
2.951 s (29999490 allocations: 610.34 MiB)
3.327 s (39998979 allocations: 762.92 MiB)
20.163 ns (0 allocations: 0 bytes)
What I am not quite able to understand is why the Julia version seems to be so much slower at dictionary value assignment and retrieval in a loop (first two tests), but at the same time so much faster at single-key retrieval (last test). It seems to be 4 times slower in a loop, but twice as fast outside one.
I'm new to Julia, so I am not sure if I am doing something suboptimal or if this is somehow expected.
Since you are benchmarking in a top-level scope, you have to interpolate variables into @btime with $, so the way to benchmark your code is:
julia> using BenchmarkTools
julia> d = Dict{Int64, Int64}()
Dict{Int64, Int64}()
julia> n = 10^7
10000000
julia> r = 1:n
1:10000000
julia> @btime begin
           for i in $r
               $d[i] = i
           end
       end
842.891 ms (0 allocations: 0 bytes)
julia> @btime begin
           for i in $r
               _ = $d[i]
           end
       end
618.808 ms (0 allocations: 0 bytes)
julia> @btime $d[10]
6.300 ns (0 allocations: 0 bytes)
10
Timing for Python 3 on the same machine in Jupyter Notebook is:
n = int(10.0**7)
d = {}
%timeit for i in range(n): d[i] = i
913 ms ± 87.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit for i in range(n): _ = d[i]
816 ms ± 92.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit d[10]
50.2 ns ± 2.97 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
However, for the first operation I assume you actually wanted to benchmark this:
julia> function f(n)
           d = Dict{Int64, Int64}()
           for i in 1:n
               d[i] = i
           end
       end
f (generic function with 1 method)
julia> @btime f($n)
1.069 s (72 allocations: 541.17 MiB)
against this:
def f(n):
    d = {}
    for i in range(n):
        d[i] = i
%timeit f(n)
1.18 s ± 65.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
It should also be noted that using a specific value of n can be misleading, as Julia and Python are not guaranteed to resize their collections at the same moments or to the same new sizes (to store a dictionary you normally allocate more memory than strictly needed in order to avoid hash collisions, so the specific value of n tested might actually matter).
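As an illustration of this resizing behavior on the Python side (CPython-specific; the exact thresholds are an implementation detail that varies between versions), you can watch the allocated size of a dict jump as insertions cross resize thresholds:

```python
import sys

# Record at which insertion counts CPython reallocates the dict's
# backing table, observable as a jump in sys.getsizeof.
d = {}
resize_points = []
prev = sys.getsizeof(d)
for i in range(1000):
    d[i] = i
    cur = sys.getsizeof(d)
    if cur != prev:
        resize_points.append((i + 1, cur))
        prev = cur
print(resize_points)
```

Each pair printed is (number of keys at which a resize happened, new size in bytes); a benchmark whose n lands just before or just after such a threshold can see noticeably different costs.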
Note that if I declare the global variables as const, everything is fast, as the compiler can then optimize the code (it knows the types of the values bound to the global variables cannot change); therefore using $ is not needed:
julia> using BenchmarkTools
julia> const d = Dict{Int64, Int64}()
Dict{Int64, Int64}()
julia> const n = 10^7
10000000
julia> const r = 1:n
1:10000000
julia> @btime begin
           for i in r
               d[i] = i
           end
       end
895.788 ms (0 allocations: 0 bytes)
julia> @btime begin
           for i in $r
               _ = $d[i]
           end
       end
582.214 ms (0 allocations: 0 bytes)
julia> @btime $d[10]
6.800 ns (0 allocations: 0 bytes)
10
If you are curious what the benefits of native support for threading are, here is a simple benchmark (this functionality is part of the language):
julia> Threads.nthreads()
4
julia> @btime begin
           Threads.@threads for i in $r
               _ = $d[i]
           end
       end
215.461 ms (23 allocations: 2.17 KiB)