
F# execution speed difference between Visual Studio and the command-line compiler

There is a significant difference in execution time between a program built with F# in Visual Studio (Community 2019) and one built with the F# command-line compiler (both using F# 4.7). My question: why is there this difference?

I am using Windows 10 Home version 1809 (recent). The program mainly uses BigIntegers in a Pollard rho factoring algorithm (program below). For Visual Studio I used a console project.

The elapsed time for visual studio is 28 seconds, and 39 seconds for the command line.

I am using a release build for the x64 target on both. I have tried many fsc command-line options (--debug-, --optimize+, --standalone) without any appreciable difference.

Compiling from the command line with

fsc rho.fs

produces

 7168 Nov  2 16:14 rho.exe

As mentioned above, adding command line options does not make much difference.
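For completeness, a command line forcing the 64-bit target and full optimization might look like this (--platform:x64 and --optimize+ are standard fsc flags; treat this as a sketch, since the exact combinations I tried are not all listed):

fsc --optimize+ --platform:x64 rho.fs

None of these options changed the timing appreciably.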

The visual studio output is

 10752 Nov  2 09:12 rho0.dll*
159744 Nov  2 09:12 rho0.exe*

So the outputs are very different. rho and rho0 are the same source.

Both versions produce the same answer, but with a significant elapsed time difference. Why?

The program is:

open System
open System.Diagnostics
open System.Numerics

type Z = System.Numerics.BigInteger

let rho n maxIter c1 =
  let mutable iter = 1
  let mutable prod = 1I
  let mutable x    = 2I
  let mutable y    = 11I
  let mutable gcd  = 0I
  let mutable solution = false

  let stopWatch = Stopwatch();
  stopWatch.Start()

  while not solution && iter < maxIter do
    x <- (x * x + c1) % n;
    y <- (y * y + c1) % n;
    y <- (y * y + c1) % n;
    prod <- ((y - x) * prod) % n;
    if (iter % 150 = 0) 
    then
      gcd <- Z.GreatestCommonDivisor (n, prod)
      if (gcd <> 1I) then
        stopWatch.Stop()
        printfn "rho c1 = %A" c1
        printfn "factor, iterations = %A, %A" gcd iter
        printfn "elapsed time = %A" stopWatch.ElapsedMilliseconds
        solution <- true
      else
        prod <- 1I
        iter <- iter+1
    else
      iter <- iter+1
  if (not solution) then
    printfn "no solution, iterations = %A" iter
  else printfn "solution"


let n = Z.Pow(2I,257) - 1I
let maxIter = 30000000
printfn "calling rho"
let result = rho n maxIter 7I

Update 11/4/2019:

From the command line I built a project using .NET Core (instructions at https://docs.microsoft.com/en-us/dotnet/fsharp/get-started/get-started-command-line ).

The application ran in 28 seconds. So it appears that fsc on the command line targets .NET Framework, but if you create a command-line project with .NET Core, the run time is significantly reduced. The default for a Visual Studio console application is .NET Core.
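The steps from the linked instructions boil down to something like the following (the project name RhoApp is just a placeholder; dotnet new console -lang "F#" and dotnet run -c Release are standard .NET Core CLI commands):

dotnet new console -lang "F#" -o RhoApp
cd RhoApp
dotnet run -c Release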

In VS if I change the framework from .NET Core to .NET Framework, the run time increases to 39 seconds.

I took your example code and ran it through the different configurations:

.NET Core from VS Release x64 - 37214 ms with (9171, 3, 0) CC
.NET Core from VS Release x86 - 69903 ms with (7673, 6, 0) CC
.NET Core from VS Release Any - 35694 ms with (9171, 3, 0) CC

.NET Core using EXE Release x64 - 37995 ms with (9171, 3, 0) CC
.NET Core using EXE Release x86 - 72489 ms with (7673, 7, 0) CC
.NET Core using EXE Release Any - 36106 ms with (9171, 3, 0) CC

.NET Framework 4.7.2 from VS Release x64 - 49697 ms with (5935, 4, 0) CC
.NET Framework 4.7.2 from VS Release x86 - 81324 ms with (4945, 8, 0) CC
.NET Framework 4.7.2 from VS Release Any - 80521 ms with (4945, 8, 0) CC

.NET Framework 4.7.2 using EXE Release x64 - 49450 ms with (5935, 4, 0) CC
.NET Framework 4.7.2 using EXE Release x86 - 80418 ms with (4945, 8, 0) CC
.NET Framework 4.7.2 using EXE Release Any - 80458 ms with (4945, 8, 0) CC

.NET Core using dotnet run x64 - 37614 ms with (9171, 3, 0) CC
.NET Core using dotnet run no tiered compilation x64 - 37186 ms with (9171, 3, 0) CC

From this it seemed that there is quite a big difference between x86 and x64. Have you tried forcing x64 mode both in VS and from the command line? DLLs can be compiled for Any CPU but still have a preference for x86.
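One way to force x64 regardless of how the project is built is to set the platform target in the project file (PlatformTarget and Prefer32Bit are standard MSBuild properties; this is a sketch, not the exact project file I used):

<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>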

In addition, from my tests it seems that .NET Core performed better than .NET Framework 4.7.2.

I tried disabling tiered compilation in .NET Core as well, as that can occasionally cause performance issues, but I couldn't spot any real difference.
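For reference, tiered compilation can be disabled either in the project file or with an environment variable before launching the app (both are standard .NET Core settings; shown here as a sketch):

<PropertyGroup>
  <TieredCompilation>false</TieredCompilation>
</PropertyGroup>

or, from a Windows command prompt:

set COMPlus_TieredCompilation=0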

I realize this is frowned upon on StackOverflow, but as project configurations can be tricky to capture in a post, I decided to put the example code I used here: https://github.com/mrange/CodeStack/tree/master/q58675873/FsPerfSo

The OP can check whether these numbers reproduce on the OP's machine.

So from my perspective .NET Core looks better than .NET Framework but why?

Looking at the code through dnSpy, I can't see significant differences between the OP's code compiled for .NET Core and for .NET Framework. However, looking at the System.Numerics dependency, I can spot some pretty significant differences between the .NET Core and .NET Framework versions of System.Numerics.

The System.Numerics version is newer in .NET Core, but it's not certain the two follow the same versioning (4.1.2.0 for .NET Core and 4.0.0.0 for .NET Framework).
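If you want to confirm which System.Numerics build your program is actually running against, you can print the loaded assembly's identity at run time (typeof and Assembly.FullName are standard .NET reflection APIs):

printfn "%s" (typeof<System.Numerics.BigInteger>.Assembly.FullName)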

So bottom line, from my perspective, make sure to use x64 and .NET Core.
