
Copy Data From S3 to Redshift [Precision issue in numeric data]

I am copying data from a text file into a Redshift table using the following command:

COPY redshift_table_name FROM 's3://gamma-audit-calculation-output-ngr-data-json/2021/05/10/08/kinesis-calculation-output-ngr-data-1-2021-05-10-09-48-24-82ecea90-ef50-4907-82d7-8b162ca2b841' CREDENTIALS iam_role json 'auto';

I am attaching the file present at the path specified above.

The data present in the file is:

{"totalgross":6113.47,"totalnetpay":3661.6,"calculationtime":"05/10/2021 02:48:24 AM PDT","dynamicngrlaunched":true,"employeeid":"881448","totalanytimepaywithdrawals":6.62,"totalimputedincome":12.1,"paycheckdate":"2021-04-30","calculationtimeepochmillis":"1620640104258","ngr":0.60,"totalanytimepayrepayments":0.0,"otherrepayments":0.0,"payenddate":"2021-04-30","employeeid_calculationtimeepochmillis":"881448_1620640104258"}

The schema for my Redshift table is:

create table table_name ( employeeid varchar(65535), ngr numeric(17, 2), totalgross numeric(17, 2), totalnetpay numeric(17, 2), earningamount numeric(17, 2), totalimputedincome numeric(17, 2), totalanytimepaywithdrawals numeric(17, 2), totalanytimepayrepayments numeric(17, 2), dynamicngrlaunched boolean, paycheckdate varchar(65535), payenddate varchar(65535), calculationtime varchar(65535), otherRepayments numeric(17, 2), calculationtimeepochmillis bigint, employeeid_calculationtimeepochmillis varchar(65535) ) DISTKEY (employeeid) SORTKEY (calculationtimeepochmillis);

The problem I am facing is that the ngr value, while being saved to the Redshift table, changes to 0.59 instead of 0.60. How is this possible?

Your DDL, reformatted for readability:

create table table_name
(
  employeeid varchar(65535),
  ngr numeric(17, 2),
  totalgross numeric(17, 2),
  totalnetpay numeric(17, 2),
  earningamount numeric(17, 2),
  totalimputedincome numeric(17, 2),
  totalanytimepaywithdrawals numeric(17, 2),
  totalanytimepayrepayments numeric(17, 2),
  dynamicngrlaunched boolean,
  paycheckdate varchar(65535),
  payenddate varchar(65535),
  calculationtime varchar(65535),
  otherRepayments numeric(17, 2),
  calculationtimeepochmillis bigint,
  employeeid_calculationtimeepochmillis varchar(65535)
)
DISTKEY (employeeid)
SORTKEY (calculationtimeepochmillis);

Before getting onto anything else, I would advise you in the strongest possible terms NOT to use maximum-length varchar columns. Last I knew, when rows are brought into memory, they use an amount of memory equal to their maximum length, as specified in the DDL. You have five varchar(65535) columns, so one row of your table uses about 320 kilobytes of memory.

Remember that the available memory is divided up into queues and slots and then across slices, so the memory actually available to a query may not be very much - it varies hugely, but it could well be something like 100 MB in total. If you're going to do hash joins, you need the smaller table in the join, once hashed, to fit into memory, or performance will go to hell. A running query also needs memory for other things, so of that 100 MB you might have at most half available for the hash; 50 MB with 320 KB rows gives you a maximum of about one hundred and fifty rows in your table. You can of course blow right through this - Redshift won't stop you, it won't warn you in any way - but performance will go to hell and you'll have no idea why.

Also be careful with your numerics not to go beyond a precision of 19. When the precision is 19 or less, a numeric is eight bytes, but when it is 20 or more, a numeric becomes sixteen bytes (regardless of the value you actually store) and has to be processed by a math library, rather than directly by the processor hardware.

Also, remember to use NOT NULL where possible, since it reduces the size of a column. This is particularly important for boolean, which is one bit per value when NOT NULL but two bits per value when nullable, and for varchar, where being nullable adds one byte to the size of the data stored for each string.

Finally, you're not setting any encodings. Redshift will choose them for you, but it does a terrible job of picking encodings. I would strongly advise you to pick your own encodings.
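To make the above concrete, here is one possible reshaping of your DDL along these lines. It is only a sketch: the reduced varchar lengths, the NOT NULL constraints and the encodings are illustrative assumptions about your data, not measurements of it, so adjust them to match what you actually store.

create table table_name
(
  employeeid                             varchar(32)    encode zstd not null,
  ngr                                    numeric(17, 2) encode az64 not null,
  totalgross                             numeric(17, 2) encode az64 not null,
  totalnetpay                            numeric(17, 2) encode az64 not null,
  earningamount                          numeric(17, 2) encode az64,            -- not present in the sample record, so left nullable
  totalimputedincome                     numeric(17, 2) encode az64 not null,
  totalanytimepaywithdrawals             numeric(17, 2) encode az64 not null,
  totalanytimepayrepayments              numeric(17, 2) encode az64 not null,
  dynamicngrlaunched                     boolean        encode raw  not null,
  paycheckdate                           varchar(10)    encode zstd not null,   -- "YYYY-MM-DD"
  payenddate                             varchar(10)    encode zstd not null,
  calculationtime                        varchar(32)    encode zstd not null,
  otherrepayments                        numeric(17, 2) encode az64 not null,
  calculationtimeepochmillis             bigint         encode raw  not null,   -- sort key left unencoded
  employeeid_calculationtimeepochmillis  varchar(64)    encode zstd not null
)
DISTKEY (employeeid)
SORTKEY (calculationtimeepochmillis);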

Now, on to your problem.

The problem I am facing is that the ngr value, while being saved to the Redshift table, changes to 0.59 instead of 0.60. How is this possible?

I may be wrong - I'd need to test to check - but my guess is that the number is being read first as a float and then converted to a numeric.

Integers (which is what numeric is, under the hood) and floating point numbers behave differently.

Integers are exact; floating point numbers are not. By this I mean that when you store an integer, you always get back exactly the number you stored. This is not the case with floating point numbers. Imagine the continuum of numbers between the smallest and largest floating point number as a picket fence: every so often there is a post going into the earth, and only the numbers at the posts can be stored. When you store a number, it is converted to the nearest storable number; that is what is stored, and that is what you get back.

So when you store 0.60, there is no "post" exactly at 0.60 - the nearest storable value is a fraction below it (roughly 0.5999999...), and when that stored value is later truncated to two decimal places, you get 0.59 back when you read the number.

If you want the number to be exact, you could multiply your numbers by a power of 10 so that the fractional part is always zero, and then store them as integers. So in your case, if I assume all your numbers have two decimal places of fractional part, multiply them by 100, so 0.60 becomes 60, and store 60 as an integer. Do all your math using integers, and only convert back to floating point at the very last stage.
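As a minimal sketch of that idea (the table ngr_scaled_example and the column ngr_hundredths are made-up names for illustration, and it assumes the producer writes the already-scaled integer, e.g. 60 rather than 0.60, into the data file):

-- Store hundredths as an exact integer: 0.60 is represented exactly as 60.
create table ngr_scaled_example
(
  employeeid      varchar(32) not null,
  ngr_hundredths  bigint      not null
);

-- Convert back to a two-decimal value only when reading:
select employeeid,
       ngr_hundredths / 100.0 as ngr
from ngr_scaled_example;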

There is a famous white paper by David Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which explains the issue:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

I think user15782476's response describes perfectly what the root cause of the problem is. It helped me a lot to understand what is happening, as I was struggling with the same issue. I have found a workaround for the problem, though, that might be helpful.

To work around Redshift truncating the fractional digits instead of rounding them, you need to adjust the value slightly. E.g. if you want 0.60 to be persisted, write 0.601 into the data file. This results in a floating point value of about 0.600999 instead of 0.599999, and that gets truncated correctly to 0.60. Here is a Java method using reflection to adjust all BigDecimal field values with a calculated correction (e.g. for 0.60, which has scale = 2, it would add 0.001):

/**
 * Adds a small correction (one unit below the last fractional digit) to every
 * BigDecimal field of the given entity, so that the value survives Redshift's
 * truncation: e.g. 0.60 (scale 2) becomes 0.601 before being written out.
 * Uses java.lang.reflect.Field and java.math.BigDecimal, and assumes a logger
 * named "log" (e.g. from Lombok's @Slf4j) is available on the enclosing class.
 */
private <T> T adjustNumericFields(T entity) {
  for (Field f : entity.getClass().getDeclaredFields()) {
    if (f.getType().equals(BigDecimal.class)) {
      try {
        f.setAccessible(true);
        BigDecimal bigDecimal = (BigDecimal) f.get(entity);
        if (bigDecimal != null) {
          int scale = bigDecimal.scale();
          // One tenth of the smallest fractional unit, e.g. 0.001 for scale 2.
          BigDecimal correction = BigDecimal.ONE.movePointLeft(scale + 1);
          f.set(entity, bigDecimal.add(correction));
        }
      } catch (Exception e) {
        log.warn("Could not adjust value: {}/{} - {}",
            entity.getClass().getSimpleName(), f.getName(), e.getMessage());
      }
    }
  }
  return entity;
}
