
Unexpected behavior from cin when overflowing int

All, I've got some code here whose behavior I can't explain. It is posted below. I looked at Why does integer overflow cause errors with C++ iostreams?, but it doesn't really answer my question.

#include <iostream>
#include <stdio.h>
using namespace std;

int main()
{
    int x;
    scanf("%d", &x);   // C-style read
    cout << "Value of x = " << x << endl;
    cin >> x;          // C++ stream read into the same variable
    cout << "Failure Detected = " << cin.fail() << endl;
    cout << "Value of x = " << x << endl;
    return 0;
}

So, what I expect this code to do is read in an integer, print the value of that integer, read in another integer (into the same variable), and print that integer. If I enter 7 and then 2, it works as expected. However, if I enter 2^31 = 2147483648 (one past INT_MAX for a 32-bit int) for both the first and the second input, then the first output says "Value of x = -2147483648" and the second says "Value of x = 2147483647", and cin.fail() returns true.

What is cin doing to the input? I thought that if cin.fail() was true, the value of x should be left unaffected. If not left unaffected, I would expect the value of x to overflow as normal (like scanf does). What's going on with cin here? Why is it capping the value at the maximum int value?

Thanks in advance!

In C++98 the variable was left unchanged when input failed. This was a disadvantage if you tried to read into an uninitialized variable.

For example:

int a;
cin >> a;
cout << a;    // UB if input failed!

Since C++11 (via the resolution of LWG issue 696), the variable is instead set to the largest or smallest representable value when the input is out of range, and failbit is set.


For operator>>(int& val) the standard says [istream.formatted.arithmetic]:

The conversion occurs as if performed by the following code fragment (using the same notation as for the preceding code fragment):

typedef num_get<charT,istreambuf_iterator<charT,traits> > numget;
iostate err = ios_base::goodbit;
long lval;
use_facet<numget>(loc).get(*this, 0, *this, err, lval);
if (lval < numeric_limits<int>::min()) {
  err |= ios_base::failbit;
  val = numeric_limits<int>::min();
} else if (numeric_limits<int>::max() < lval) {
  err |= ios_base::failbit;
  val = numeric_limits<int>::max();
} else
  val = static_cast<int>(lval);
setstate(err);
  1. Your scanf: The behaviour on overflowing a signed integral type in C++ is undefined. It's rather pointless to speculate on what is happening under the hood; "overflow as normal" is particularly meaningless.

  2. Your cin: Before C++11, x would not have been changed if it could not accommodate the input, so the behaviour of the subsequent cout would have been undefined, since you'd be reading back an uninitialised variable. From C++11 onwards, x is capped (or floored) at its largest (or smallest) value if its range would be exceeded. That is what is happening in your second case.
