
Truncated Core Data NSData objects

I am saving arrays of doubles in an NSData* object that is persisted as a binary property in a Core Data (SQLite) data model. I am doing this to store sampled data for graphing in an iPhone app. Sometimes, when there are more than 300 doubles in the binary object, not all of the doubles get saved to disk. When I quit and relaunch my app, there may be as few as 25 data points that have persisted, or as many as 300.

I am using NSSQLitePragmasOption with synchronous = FULL, and this may be making a difference. It is hard to tell, as the bug is intermittent.

Given the warnings about performance problems as a result of using synchronous = FULL, I am seeking advice and pointers.
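For reference, this is roughly how I pass the option when adding the persistent store (the coordinator and store URL names here are placeholders from my app delegate):

    NSDictionary *pragmaOptions = [NSDictionary dictionaryWithObject:@"FULL"
                                                               forKey:@"synchronous"];
    NSDictionary *storeOptions = [NSDictionary dictionaryWithObject:pragmaOptions
                                                              forKey:NSSQLitePragmasOption];
    NSError *error = nil;
    // persistentStoreCoordinator and storeURL are assumed to be set up elsewhere.
    [persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                             configuration:nil
                                                       URL:storeURL
                                                   options:storeOptions
                                                     error:&error];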

Thanks.

[[ Edit: here is code. ]]

The (as yet unrealized) intent of -addToCache: is to add each new datum to the cache but only flush (fault?) the Data object periodically; a rough sketch of that idea follows the listing below.

From Data.m

@dynamic dataSet; // NSData * attribute of Data entity

 - (void) addDatum:(double_t)datum
    {
    DLog(@"-[Data addDatum:%f]", datum);
    [self addToCache:datum];
    }

- (void) addToCache:(double_t)datum
    {
    if (cache == nil)
        {
        cache = [NSMutableData dataWithData:[self dataSet]];
        [cache retain];
        }
    [cache appendBytes:&datum length:sizeof(double_t)];
    DLog(@"-[Data addToCache:%f] ... [cache length] = %d; cache = %p", datum, [cache length], cache);
    [self flushCache];
    }

- (void) wrapup
    {
    DLog(@"-[Data wrapup]");
    [self flushCache];
    [cache release];
    cache = nil;
    DLog(@"[self isFault] = %@", [self isFault] ? @"YES" : @"NO"); // [self isFault] is always NO.
    }

- (void) flushCache
    {
    DLog(@"flushing cache to store");
    [self setDataSet:cache];
    DLog(@"-[Data flushCache:] [[self dataSet] length] = %d", [[self dataSet] length]);
    }

- (double*) bytes
    {
    return (double*)[[self dataSet] bytes];
    }

- (NSInteger) count
    {
    return [[self dataSet] length]/sizeof(double);
    }

- (void) dump
    {
    ALog(@"Dump Data");
    NSInteger numDataPoints = [self count];
    double *data = (double*)[self bytes];
    ALog(@"numDataPoints = %d", numDataPoints);
    for (int i = 0; i < numDataPoints; i++)
        {
        ALog(@"data[%d] = %f", i, data[i]);   // log each stored value
        }
    }
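Below is a rough sketch of the periodic-flush idea mentioned above; the batch size of 100 points is an arbitrary choice for illustration, and flushCache is assumed to behave as listed:

    // Hypothetical: flush the cache into the dataSet attribute only every
    // kFlushInterval points instead of on every -addToCache: call.
    static const NSUInteger kFlushInterval = 100;   // arbitrary batch size

    - (void) addToCache:(double_t)datum
        {
        if (cache == nil)
            {
            cache = [[NSMutableData alloc] initWithData:[self dataSet]];
            }
        [cache appendBytes:&datum length:sizeof(double_t)];
        if (([cache length] / sizeof(double_t)) % kFlushInterval == 0)
            {
            [self flushCache];   // -wrapup still flushes whatever remains
            }
        }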

I was trying to get behavior as if my Core Data entity could have an NSMutableData attribute. To do this, my NSManagedObject subclass (called Data) had an NSData attribute and an NSMutableData ivar. My app takes sample data from a sensor and appends each data point to the data set; this is why I needed this design.

Each new data point was appended to the NSMutableData, and then the NSData attribute was set to the NSMutableData.

I suspect that because the NSData pointer wasn't changing (though its content was), Core Data did not appreciate the amount of change. Calling -hasChanges on the NSManagedObjectContext showed that there had been changes, and -updatedObjects even listed the Data object as having changed. But the actual data that was being written seems to have been truncated (sometimes).
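If that suspicion is right, one way to make the change unambiguous to Core Data would be to assign a fresh immutable copy on every flush rather than the same mutable object; a sketch (this is not what my original code did):

    - (void) flushCache
        {
        DLog(@"flushing cache to store");
        // Hand Core Data a new immutable snapshot so the attribute value is a
        // different object each time, not the same NSMutableData mutated in place.
        [self setDataSet:[NSData dataWithData:cache]];
        }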

To work around this I changed things slightly. New data points are still appended to the NSMutableData, but the NSData attribute is only set when sampling is completed. This means there is a chance that a crash might result in truncated data, but for the most part this workaround seems to have solved the problem.
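In other words, the per-datum flush goes away and the attribute is written once at the end of sampling, roughly:

    - (void) addToCache:(double_t)datum
        {
        if (cache == nil)
            {
            cache = [[NSMutableData alloc] initWithData:[self dataSet]];
            }
        [cache appendBytes:&datum length:sizeof(double_t)];
        // No flush here any more; the attribute is written once in -wrapup.
        }

    - (void) wrapup
        {
        [self flushCache];   // single write of the completed sample set
        [cache release];
        cache = nil;
        }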

Caveat emptor: the bug was always intermittent, so it is possible that it is still there, just harder to manifest.
