
Compile C#, so that it runs with the speed of C++

Alright, so I wanted to ask if it's actually possible to make a translator from C# to C++,
so that code written in C# would be able to run as fast as code written in C++.
Is it actually possible to do? I'm not asking how hard it is going to be.

What makes you think that translating your C# code to C++ would magically make it faster?

Languages don't have a speed. Assuming that C# code is slower (I'll get back to that), it is because of what that code does (including the implicit requirements placed by C#, such as bounds checking on arrays), and not because of the language it is written in.

If you converted your C# code to C++, it would still need to do bounds checking on arrays, because the original source code expected this to happen, so it would have to do just as much work.
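To make the bounds-checking point concrete, here is a minimal sketch: C# guarantees that an out-of-range array access throws `IndexOutOfRangeException`, so a faithful translation to C++ would have to emit the same check, because the program's observable behaviour depends on it.

```csharp
using System;

class Program
{
    static void Main()
    {
        int[] values = { 1, 2, 3 };
        try
        {
            // C# checks every array index at runtime; a faithful C++
            // translation would have to perform the same check, because
            // the original program relies on this exception being thrown.
            Console.WriteLine(values[5]);
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("caught out-of-range access");
        }
    }
}
```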

Moreover, C# often isn't slower than C++. There are plenty of benchmarks floating around on the internet, generally showing that for the most part, C# is as fast as (or faster than) C++. Only when you spend a lot of time optimizing your code does C++ become faster.

If you want faster code, you need to write code that requires less work to execute, not try to change the source language. That's just cargo-cult programming at its worst. You once saw some efficient code, and that was written in C++, so now you try to make things C++, in the hope of attracting any efficiency that might be passing by.

It just doesn't work that way.

Although you could translate C# code to C++, there would be the issue that C# depends on the .NET Framework libraries, which are not native, so you could not simply translate C# code to C++.

Update

Also, C# code depends on the runtime for things such as memory management, i.e. garbage collection. If you translated the C# code to C++, where would the memory-management code be? Parsing and translating is not going to fix issues like that.

The Mono project has invested quite a lot of energy in turning LLVM into a native machine-code compiler for the C# runtime, although there are some problems with specific language constructs such as shared generics. Check it out and take it for a spin.

You can use NGen to compile IL to native code.

Performance-related tweaks:

Platform-independent

  • use a profiler to spot the bottlenecks;

    • prevent unnecessary garbage (spot it using the generation-0 collection count and the Large Object Heap)
    • prevent unnecessary copying (use struct wisely)
    • prevent unwarranted generics (code sharing has unexpected performance side effects)
    • prefer old-fashioned loops over enumerator blocks when performance is an issue
    • when using LINQ, watch closely where you maintain or break deferred evaluation; both can be enormous boosts to performance
  • use Reflection.Emit or expression trees to precompile dynamic logic that is a performance bottleneck
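A minimal sketch of that last point (the shape of the tree and the delegate name `f` are just illustrations): an expression tree built at runtime is compiled once to a delegate, after which each call costs about as much as a normal method call, instead of paying reflection or interpretation overhead on every invocation.

```csharp
using System;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        // Build the expression x => x * x + 1 at runtime...
        ParameterExpression x = Expression.Parameter(typeof(int), "x");
        Expression body = Expression.Add(
            Expression.Multiply(x, x),
            Expression.Constant(1));

        // ...and compile it once to a delegate. Subsequent calls to f
        // run compiled code rather than interpreting the tree.
        Func<int, int> f = Expression.Lambda<Func<int, int>>(body, x).Compile();

        Console.WriteLine(f(5)); // prints 26
    }
}
```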

Mono

  • use mono --gc=sgen --optimize=inline,... (the SGen garbage collector can make an orders-of-magnitude difference); see also man mono for many tuning/optimization options
  • use MONO_GENERIC_SHARING=none to disable sharing of generics, making particular tasks a lot quicker, especially when supporting both value types and reference types (not recommended for regular production use)
  • use the -optimize+ compile flag (optimizing the IL independently of what the JIT may do with it)
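Put together on the command line, the flags above look roughly like this (a sketch: it assumes the Mono `mcs` compiler and `mono` runtime are on the PATH, and flag spellings may vary between Mono versions):

```shell
# Compile with IL-level optimizations enabled.
mcs -optimize+ Program.cs

# Run under the SGen collector with method inlining, and with
# generic sharing disabled for this one invocation.
MONO_GENERIC_SHARING=none mono --gc=sgen --optimize=inline Program.exe
```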

Less mainstream:

MS .NET

Most of the above have direct Microsoft equivalents (NGen, /optimize+, etc.)

Of course, MS .NET doesn't have a switchable/tunable garbage collector, and I don't think a fully compiled native binary can be achieved the way it can with Mono.

As always the answer to making code run faster is:

Find the bottleneck and optimize that

Most of the time the bottleneck is either:

Time spent in a critical loop
Review your algorithm and data structures; do not change the language. The latter will give a 10% speedup, the former can give you a 1000x speedup.
If you're stuck on the best algorithm, you can always ask a specific, short, and detailed question on SO.
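A sketch of the kind of win a data-structure change can give (the duplicate-counting task here is hypothetical): the two methods below compute the same answer, but one scans the whole list for every element while the other makes a single pass with a HashSet.

```csharp
using System;
using System.Collections.Generic;

class Program
{
    // O(n^2): for each element, rescan everything before it.
    static int CountDuplicatesSlow(int[] items)
    {
        int dupes = 0;
        for (int i = 0; i < items.Length; i++)
            for (int j = 0; j < i; j++)
                if (items[j] == items[i]) { dupes++; break; }
        return dupes;
    }

    // O(n): one pass with a HashSet — same answer, a different data
    // structure, and a speedup no change of language can match.
    static int CountDuplicatesFast(int[] items)
    {
        var seen = new HashSet<int>();
        int dupes = 0;
        foreach (int item in items)
            if (!seen.Add(item)) dupes++;
        return dupes;
    }

    static void Main()
    {
        int[] data = { 1, 2, 3, 2, 1, 4, 1 };
        Console.WriteLine(CountDuplicatesSlow(data)); // prints 3
        Console.WriteLine(CountDuplicatesFast(data)); // prints 3
    }
}
```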

Time spent waiting for resources from a slow source
Reduce the amount of stuff you're requesting from the source
instead of:

SELECT * FROM bigtable 

do

SELECT TOP 10 * FROM bigtable ORDER BY xxx

The latter will return instantly, and you cannot show a million records in a meaningful way anyhow.
Or you can have the server at the other end reduce the data so that it doesn't take 100 years to cross the network.

Alternatively, you can execute the slow data-fetch routine in a separate thread, so the rest of your program can do meaningful work instead of waiting.
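A sketch of that separate-thread idea (the `FetchReport` method is hypothetical and just sleeps to simulate a slow database or network call):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // Hypothetical slow fetch — stands in for a database or network call.
    static string FetchReport()
    {
        Thread.Sleep(2000);   // simulate a slow source
        return "report data";
    }

    static void Main()
    {
        // Start the slow fetch on a worker thread...
        Task<string> fetch = Task.Run(() => FetchReport());

        // ...so the main thread can do meaningful work meanwhile.
        Console.WriteLine("doing useful work while the fetch runs...");

        // Block only at the point where the result is actually needed.
        Console.WriteLine(fetch.Result);
    }
}
```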

Time spent because you are overflowing memory with gigabytes of data
Use a different algorithm that works on a smaller dataset at a time.
Try to optimize cache usage.
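One common instance of working on a smaller dataset at a time is streaming instead of loading everything up front. A sketch (the file here is a small sample created by the program itself; in practice it would be an existing multi-gigabyte log):

```csharp
using System;
using System.IO;
using System.Linq;

class Program
{
    static void Main()
    {
        // Create a small sample file so the sketch is runnable.
        File.WriteAllLines("sample-log.txt",
            new[] { "ok", "ERROR one", "ok", "ERROR two" });

        // File.ReadAllLines would materialize every line in memory at once;
        // File.ReadLines streams one line at a time, so the working set
        // stays small no matter how big the file is.
        int errors = File.ReadLines("sample-log.txt")
                         .Count(line => line.Contains("ERROR"));

        Console.WriteLine(errors); // prints 2
    }
}
```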

The answer to efficient coding is to measure where your running time goes.
Use a profiler.
see: http://csharp-source.net/open-source/profilers

And optimize those parts that eat more than 50% of your CPU time.
Do this for a number of iterations, and soon your 10-hour running time will be down to a manageable 3 minutes, instead of the 9.5 hours you would get from switching to this or that "better" language.
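A full profiler is the right tool, but for a single suspect section a poor man's measurement with Stopwatch already supports the measure-then-optimize loop described above (the loop being timed here is just a stand-in for your hot code):

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Time the suspect section before and after each optimization
        // round, and only keep changes that actually move the needle.
        var sw = Stopwatch.StartNew();

        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) sum += i;

        sw.Stop();
        Console.WriteLine($"sum={sum}, elapsed={sw.ElapsedMilliseconds} ms");
    }
}
```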
