
Fixing OutOfMemoryErrors

Hello everyone, I am seeing a major uptick in crashes related to memory leaks in our recent Android builds. We have done some things to try to mitigate these issues, but we are still seeing the same crashes in the latest release.

 Fatal Exception: java.lang.OutOfMemoryError
 Failed to allocate a 16 byte allocation with 1890136 free bytes and 1845KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 54788096 bytes)
 java.lang.Long.valueOf (Long.java:845)
 io.reactivex.internal.operators.observable.ObservableInterval$IntervalObserver.run (ObservableInterval.java:82)
 io.reactivex.Scheduler$PeriodicDirectTask.run (Scheduler.java:562)
 io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:509)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
 java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
 java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
 java.lang.Thread.run (Thread.java:923)
 Fatal Exception: java.lang.OutOfMemoryError
 Failed to allocate a 16 byte allocation with 1590248 free bytes and 1552KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 39845888 bytes)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:161)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:187)
 io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:531)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
 java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
 java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
 java.lang.Thread.run (Thread.java:923)
 Fatal Exception: java.lang.OutOfMemoryError
 Failed to allocate a 16 byte allocation with 1215008 free bytes and 1186KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 49020928 bytes)
 io.reactivex.internal.queue.MpscLinkedQueue.offer (MpscLinkedQueue.java:62)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:167)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:187)
 io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:531)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
 java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
 java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
 java.lang.Thread.run (Thread.java:923)

Is there some framework change that is triggering these issues, or is it application code that is causing this? What are some strategies for addressing crashes like the ones above?

Some other techniques to consider beyond the existing comments:

In-field instrumentation:

Activity patterns: if you have something that records user activity, compare installs that go a long time without crashing against installs that crash early, and see whether the users performed different actions.

Direct memory usage: since you are not yet able to reproduce this in debug builds, you could record the available memory just before and just after particular activities, to help narrow down where in the app this is occurring. You can read the app's available memory and then log it (if you can get the logs) or report it back through an analytics system.
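
For example, here is a minimal sketch of that kind of probe; the function name and the "MemProbe" tag are illustrative (not an existing API), and the analytics hook is left out:

import android.app.ActivityManager
import android.content.Context
import android.util.Log

// Sketch of an in-field memory probe: call it just before and just after a
// suspect activity and ship the line through your logging/analytics pipeline.
fun logMemorySnapshot(context: Context, label: String) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val info = ActivityManager.MemoryInfo()
    am.getMemoryInfo(info)

    val runtime = Runtime.getRuntime()
    val usedHeapKb = (runtime.totalMemory() - runtime.freeMemory()) / 1024

    // usedHeap/maxHeap describe this process; deviceAvail/lowMemory describe the device.
    Log.i("MemProbe", "$label usedHeap=${usedHeapKb}KB maxHeap=${runtime.maxMemory() / 1024}KB " +
            "deviceAvail=${info.availMem / 1024}KB lowMemory=${info.lowMemory}")
}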

Local testing:

(with LeakCanary or a profiler)

There are often points in time at which the app should return to the same level of allocated memory: for instance, if you go into a screen and come back out, the first visit may allocate some static items, but from the second use of the screen onwards you want memory to come back to a normal (quiescent) level. So stopping execution, forcing a GC, resuming execution, going through a workflow, and then coming back to the home screen (again, skipping the first pass) can be a good way to narrow down which workflow is leaving significant extra memory behind.
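
If it helps, here is one way that check could look as a debug-build helper; the double GC is a best-effort nudge to collect before measuring, not a guarantee, and the names here are illustrative:

import android.os.Debug
import android.util.Log

// Debug-only helper for the quiescent-point check: call it after the first
// (warm-up) pass through a workflow, then again after each later pass; the
// logged numbers should stay roughly flat if nothing is leaking.
fun logQuiescentHeap(tag: String) {
    Runtime.getRuntime().gc()
    System.runFinalization()
    Runtime.getRuntime().gc()

    val runtime = Runtime.getRuntime()
    val javaHeapKb = (runtime.totalMemory() - runtime.freeMemory()) / 1024
    val nativeHeapKb = Debug.getNativeHeapAllocatedSize() / 1024
    Log.i("QuiescentHeap", "$tag javaHeap=${javaHeapKb}KB nativeHeap=${nativeHeapKb}KB")
}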

It is unusual that the debug builds are not reproducing this. If you have a "friendly" end user reporting the issue, perhaps give them a debug build and ask them to support you by using it.

In a debug environment you can also try to "make it worse": for example, go into and out of a screen or workflow 10 or 100 times (scripting it for the 100-repetition case), as in the sketch below.
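
One way to script that repetition is an instrumented Espresso test; in this sketch, MainActivity and R.id.open_suspect_screen are placeholders for your own entry activity and whatever view opens the suspect screen:

import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.Espresso.pressBack
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

class RepeatedNavigationTest {
    // MainActivity is a placeholder for your app's entry activity.
    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun enterAndLeaveSuspectScreen100Times() {
        repeat(100) {
            // R.id.open_suspect_screen is a placeholder for whatever opens the screen.
            onView(withId(R.id.open_suspect_screen)).perform(click())
            pressBack() // return to the starting screen
        }
    }
}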

You can't increase the heap size dynamically, but you can request a larger heap by adding

android:largeHeap="true"

to the <application> element in AndroidManifest.xml. Adding these lines to your manifest works in some situations:

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:largeHeap="true"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">

Whether your application's processes should be created with a large Dalvik heap. This applies to all processes created for the application. It only applies to the first application loaded into a process; if you're using a shared user ID to allow multiple applications to use a process, they all must use this option consistently or they will have unpredictable results. Most apps should not need this and should instead focus on reducing their overall memory usage for improved performance. Enabling this also does not guarantee a fixed increase in available memory, because some devices are constrained by their total available memory.

To query the available memory size at runtime, use the methods getMemoryClass() or getLargeMemoryClass().
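
For example (values are in megabytes, and getLargeMemoryClass() only differs once android:largeHeap="true" is set):

import android.app.ActivityManager
import android.content.Context

// Returns the per-app heap budgets in megabytes: the normal budget and the
// one that applies when the app requests a large heap.
fun heapBudgetMb(context: Context): Pair<Int, Int> {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    return am.memoryClass to am.largeMemoryClass
}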

Use Coroutines for long or heavy operations. These crashes are coming from RxJava; maybe the work is not being handled correctly there.
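
A hedged sketch of what that could look like, assuming a lifecycle-bound CoroutineScope (e.g. viewModelScope) so that cancellation stops the periodic work; startTicker is an illustrative name, not an existing API:

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch

// Coroutine stand-in for Observable.interval: when the scope is cancelled
// (e.g. the screen goes away), the loop stops, so no periodic task keeps
// queueing work behind the scenes.
fun CoroutineScope.startTicker(periodMs: Long, onTick: suspend (Long) -> Unit): Job =
    launch {
        var tick = 0L
        while (isActive) {
            onTick(tick++)
            delay(periodMs)
        }
    }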

I am just going to take a stab at where you could look. From the stack trace it looks like you are using a scheduler to perform tasks. My suspicion is that you are running multiple threads, and since each thread requires its own allocation of memory, something to consider would be controlling the number of threads through a thread pool: this caps the number of threads available and recycles threads instead of allocating new ones, avoiding a potentially significant number of threads running at the same time.
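
As an illustrative sketch (the pool size of 4 is an arbitrary example, not a recommendation), you could back RxJava work with a single bounded pool:

import io.reactivex.Scheduler
import io.reactivex.schedulers.Schedulers
import java.util.concurrent.Executors

// One shared, bounded pool instead of ad-hoc executors per task; threads are
// recycled and the total count is capped.
private val boundedPool = Executors.newFixedThreadPool(4)
val boundedScheduler: Scheduler = Schedulers.from(boundedPool)

// Usage: myObservable.subscribeOn(boundedScheduler).observeOn(AndroidSchedulers.mainThread())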
