Mapping a list of objects using parallelStream with DozerMapper gives StackOverflowError
I have the following utility method that maps a list of domain objects to DTOs, producing a list of mapped objects:
public static <Z, T> List<T> mapList(Mapper mapper, List<Z> source, Class<T> type) {
List<T> result = new ArrayList<T>();
int listSize = source.size();
for (int i = 0; i < listSize; i++) {
result.add(mapper.map(source.get(i), type));
}
return result;
}
As the mapper I pass a singleton instance of DozerBeanMapper (managed by Spring). The source list is the result of a Hibernate query. The code above works fine.
Now I have changed the code to use the Stream API (I want to parallelize the mapping):
public static <Z, T> List<T> mapList(Mapper mapper, List<Z> source, Class<T> type) {
return source.parallelStream()
.map((s) -> mapper.map(s, type))
.collect(Collectors.toList());
}
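As a small standalone probe (the class and method names here are my own, not from the question), the following shows that parallelStream() may run the map() lambda on common ForkJoinPool worker threads rather than only on the calling thread. With Hibernate, those worker threads have no Session bound to them, which is what makes lazy loading blow up:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ParallelThreadsDemo {
    // Collects the name of the thread that processed each element.
    static List<String> workerNames() {
        return List.of(1, 2, 3, 4, 5, 6, 7, 8).parallelStream()
                .map(i -> Thread.currentThread().getName())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Typically prints a mix of the calling thread's name and
        // "ForkJoinPool.commonPool-worker-N" entries; the exact mix
        // depends on available cores and scheduling.
        System.out.println(workerNames());
    }
}
```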
and got the following:
Caused by: java.lang.NullPointerException
at org.hibernate.engine.internal.StatefulPersistenceContext.getLoadedCollectionOwnerOrNull(StatefulPersistenceContext.java:755)
at org.hibernate.event.spi.AbstractCollectionEvent.getLoadedOwnerOrNull(AbstractCollectionEvent.java:75)
at org.hibernate.event.spi.InitializeCollectionEvent.<init>(InitializeCollectionEvent.java:36)
at org.hibernate.internal.SessionImpl.initializeCollection(SessionImpl.java:1895)
at org.hibernate.collection.internal.AbstractPersistentCollection$4.doWork(AbstractPersistentCollection.java:558)
at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:260)
at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:554)
at org.hibernate.collection.internal.AbstractPersistentCollection.read(AbstractPersistentCollection.java:142)
at org.hibernate.collection.internal.PersistentSet.iterator(PersistentSet.java:180)
at org.dozer.MappingProcessor.addOrUpdateToList(MappingProcessor.java:766)
at org.dozer.MappingProcessor.addOrUpdateToList(MappingProcessor.java:850)
at org.dozer.MappingProcessor.mapListToList(MappingProcessor.java:686)
at org.dozer.MappingProcessor.mapCollection(MappingProcessor.java:553)
at org.dozer.MappingProcessor.mapOrRecurseObject(MappingProcessor.java:434)
at org.dozer.MappingProcessor.mapFromFieldMap(MappingProcessor.java:342)
at org.dozer.MappingProcessor.mapField(MappingProcessor.java:288)
at org.dozer.MappingProcessor.map(MappingProcessor.java:248)
at org.dozer.MappingProcessor.map(MappingProcessor.java:197)
at org.dozer.MappingProcessor.mapCustomObject(MappingProcessor.java:495)
at org.dozer.MappingProcessor.mapOrRecurseObject(MappingProcessor.java:446)
at org.dozer.MappingProcessor.mapFromFieldMap(MappingProcessor.java:342)
at org.dozer.MappingProcessor.mapField(MappingProcessor.java:288)
at org.dozer.MappingProcessor.map(MappingProcessor.java:248)
at org.dozer.MappingProcessor.map(MappingProcessor.java:197)
at org.dozer.MappingProcessor.map(MappingProcessor.java:187)
at org.dozer.MappingProcessor.map(MappingProcessor.java:124)
at org.dozer.MappingProcessor.map(MappingProcessor.java:119)
at org.dozer.DozerBeanMapper.map(DozerBeanMapper.java:120)
at org.mycompany.myproject.utils.BeanMapperUtil.lambda$0(BeanMapperUtil.java:30)
The execution repeats itself and finally ends in a StackOverflowError.
If I use source.stream() instead of source.parallelStream(), I don't get any error.
Any ideas?
The problem was that the method was called, with lazy loading involved, inside a method annotated with Spring's @Transactional. See e.g. this post.
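The root cause can be illustrated without Hibernate at all. In this minimal sketch a ThreadLocal stands in for Hibernate's thread-bound Session: a sequential stream runs entirely on the calling thread and sees the "session", while a parallel stream hands elements to ForkJoinPool workers that see nothing (all names here are hypothetical, for illustration only):

```java
import java.util.List;
import java.util.stream.Collectors;

public class ThreadLocalDemo {
    // Stands in for Hibernate's thread-bound Session.
    static final ThreadLocal<String> SESSION = new ThreadLocal<>();

    // Sequential: every map() call runs on the calling thread,
    // so every element sees the value set below.
    static List<String> sequentialView() {
        SESSION.set("open-session");
        return List.of("a", "b", "c", "d").stream()
                .map(s -> SESSION.get())
                .collect(Collectors.toList());
    }

    // Parallel: elements handed to ForkJoinPool workers see no
    // value ("null"); only elements processed on the calling
    // thread still see "open-session".
    static List<String> parallelView() {
        SESSION.set("open-session");
        return List.of("a", "b", "c", "d").parallelStream()
                .map(s -> String.valueOf(SESSION.get()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(sequentialView());
        System.out.println(parallelView());
    }
}
```

This is why stream() works and parallelStream() fails here: the mapping triggers lazy loading, and lazy loading only succeeds on the thread that owns the open session/transaction.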