
Is the C# compiler not reporting all errors at once on each compile?

When I compile this project, it shows 400+ errors in the Error List window. I go to the error sites, fix some, and the count drops to 120+ errors; then, after fixing some more, the next compile reports 400+ again. I can see different files appearing in the Error List window, so I'm wondering: does the compiler abort after a certain number of errors?

If so, what's the reason for this? Isn't it supposed to gather all the errors present in the project, even if there are more than 10,000 of them?

I've been meaning to write a blog article about this.

It is possible that you're simply running into some hard-coded limit for the number of errors reported. It's also possible that you're running into a more subtle and interesting scenario.

There are a lot of heuristics in the command-line compiler and the IDE compiler that attempt to manage error reporting, both to keep it manageable for the user and to make the compiler more robust.

Briefly, the way the compiler works is that it tries to get the program through a series of stages, which you can read about here:

http://blogs.msdn.com/b/ericlippert/archive/2010/02/04/how-many-passes.aspx

The idea is that if an early stage gets an error, we might not be able to successfully complete a later stage without (1) going into an infinite loop, (2) crashing, or (3) reporting crazy "cascading" errors. So what happens is, you get one error, you fix it, and then suddenly the next stage of compilation can run, and it finds a bunch more errors.

Basically, if the program is so messed up that we cannot even verify basic facts about its classes and methods, then we can't reliably give errors for method bodies. If we can't analyze a lambda body, then we can't reliably give errors for its conversion to an expression tree. And so on; there are lots of situations where later stages need to know that the previous stages completed without errors.
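As a made-up illustration of that staging (this code is not from the question, and the exact behavior varies by compiler version): when a class declaration itself has an error, the compiler may not report errors inside the method bodies of that class at all, because body analysis depends on the declaration phase succeeding.

```csharp
// Hypothetical example: the base type "Undefined" does not exist,
// so the declaration phase fails with CS0246. Depending on the
// compiler version, the type-mismatch error inside M() (CS0029)
// may not be reported until the declaration error is fixed --
// fix one error, and a "new" one appears on the next compile.
class C : Undefined
{
    void M()
    {
        int x = "hello"; // only surfaces once the class declaration compiles
    }
}
```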

The up side of this design is that (1) you get the errors that are the most "fundamental" first, without a lot of noisy, crazy cascading errors, and (2) the compiler is more robust because it doesn't have to try to do analysis on programs where the basic invariants of the language are broken. The down side is of course your scenario: that you have fifty errors, you fix them all, and suddenly fifty more appear.

Of course it will stop at some point.

Even after one error, all the rest is dubious at best. A compiler will try to recover, but that's not guaranteed to succeed.

So in any non-trivial project, it's a practical trade-off between stopping at the first error (theoretically the safest thing to do) and ploughing on in an unreliable state.

The most correct action would be to stop after one error, but that would lead to a tedious fix-one-at-a-time workflow. So a compiler tries to resynchronize to a known state and report the next error. But one error can cause false errors in the correct code that follows it, so at some point continuing stops being sensible.
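To see why resynchronization is imperfect, here is a made-up fragment (not from the question): a single missing semicolon can drag otherwise-correct code into the error report until the parser finds a point it can recover at.

```csharp
class Example
{
    static void M()
    {
        int x = 1   // CS1002: ';' expected -- the one real mistake
        int y = x + 1;  // this line is fine, but it may be flagged too,
                        // because the parser is still recovering from
                        // the missing semicolon above
    }
}
```

Fixing the one real error makes the spurious follow-on errors disappear, which is part of why the error count can swing so wildly between compiles.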

Compare your own case: 400+ errors drop to 120 after just a few fixes.

According to MSDN, the limit is configurable:

By default, the maximum number is 200 errors and warnings.
