
Fixing OutOfMemoryErrors

Hello everyone, I am seeing a major uptick in crashes related to memory leaks in our recent Android builds. We have done some things to try to mitigate these issues, but I am still seeing the same crashes in the latest release.

 Fatal Exception: java.lang.OutOfMemoryError
 Failed to allocate a 16 byte allocation with 1890136 free bytes and 1845KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 54788096 bytes)
 java.lang.Long.valueOf (Long.java:845)
 io.reactivex.internal.operators.observable.ObservableInterval$IntervalObserver.run (ObservableInterval.java:82)
 io.reactivex.Scheduler$PeriodicDirectTask.run (Scheduler.java:562)
 io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:509)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
 java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
 java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
 java.lang.Thread.run (Thread.java:923)
 Fatal Exception: java.lang.OutOfMemoryError
 Failed to allocate a 16 byte allocation with 1590248 free bytes and 1552KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 39845888 bytes)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:161)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:187)
 io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:531)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
 java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
 java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
 java.lang.Thread.run (Thread.java:923)
 Fatal Exception: java.lang.OutOfMemoryError
 Failed to allocate a 16 byte allocation with 1215008 free bytes and 1186KB until OOM, target footprint 201326592, growth limit 201326592; failed due to fragmentation (largest possible contiguous allocation 49020928 bytes)
 io.reactivex.internal.queue.MpscLinkedQueue.offer (MpscLinkedQueue.java:62)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:167)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.schedule (ExecutorScheduler.java:187)
 io.reactivex.Scheduler$Worker$PeriodicTask.run (Scheduler.java:531)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker$BooleanRunnable.run (ExecutorScheduler.java:288)
 io.reactivex.internal.schedulers.ExecutorScheduler$ExecutorWorker.run (ExecutorScheduler.java:253)
 java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1167)
 java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:641)
 java.lang.Thread.run (Thread.java:923)

Is there some framework change that is triggering these issues, or is application code causing this? What are some strategies for addressing crashes like the above?

Some other techniques to consider beyond the existing comments:

In-field instrumentation:

Activity Patterns: If you have something that records user activity, look for users whose app goes a long time without crashing and users whose app crashes earlier, and see whether they perform different actions.

Direct Memory Usage: Since you are not yet able to reproduce this on debug builds, you could record the available memory just before and just after particular activities to help you narrow down where in the app this is occurring. You can read the app's available memory and then log it (if you can get the logs) or report it back through some analytics system.
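As a rough Kotlin sketch (assuming a plain Log.d sink; swap in your own analytics call if you need the numbers from the field), something like this records both the Java heap and the device-level memory around a suspect workflow:

import android.app.ActivityManager
import android.content.Context
import android.util.Log

// Minimal sketch: call this just before and just after a suspect screen or
// workflow, then compare the snapshots. Replace Log.d with your analytics
// reporting if you need the data from production devices.
fun logMemorySnapshot(context: Context, tag: String) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memInfo = ActivityManager.MemoryInfo()
    am.getMemoryInfo(memInfo)

    val runtime = Runtime.getRuntime()
    val usedHeapKb = (runtime.totalMemory() - runtime.freeMemory()) / 1024
    val maxHeapKb = runtime.maxMemory() / 1024

    Log.d(
        "MemorySnapshot",
        "$tag usedHeap=${usedHeapKb}KB maxHeap=${maxHeapKb}KB " +
            "deviceAvail=${memInfo.availMem / (1024 * 1024)}MB lowMemory=${memInfo.lowMemory}"
    )
}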

Local testing (with LeakCanary or a profiler):

There are often points in time that should come back to the same level of allocated memory: for instance, if you go into a screen and come back out you may allocate some static items, but from the second use of the screen onwards you want the memory to come back to a normal (quiescent) point. So pausing execution, forcing a GC, resuming execution, going through a workflow, and then coming back to the home screen (again, skipping the first pass) can be a good way to narrow down which workflow is leaving significant extra memory behind.
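A minimal sketch of that check for a debug build, assuming you trigger it yourself (for example from a debug-only menu or an instrumentation test) rather than relying on LeakCanary:

// Request a GC and report the used Java heap afterwards. gc() is best-effort
// only, so the sleep is a crude way to give the collector a moment to run.
fun usedHeapAfterGcKb(): Long {
    Runtime.getRuntime().gc()
    Thread.sleep(500)
    val rt = Runtime.getRuntime()
    return (rt.totalMemory() - rt.freeMemory()) / 1024
}

// Usage sketch:
// val baseline = usedHeapAfterGcKb()
// ...navigate into the suspect workflow and back to the home screen...
// val after = usedHeapAfterGcKb()
// a consistently growing (after - baseline) points at that workflow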

It is unusual that the debug builds are not producing this effect; if you have a "friendly" end user reporting this issue, perhaps give them a debug build and ask them to help you by using it.

In a debug environment you can also try to "make it worse": for example, go into and out of a screen or workflow 10 or 100 times (scripting it for the 100 case).
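Here is a rough Espresso sketch of the scripted version; MainActivity and R.id.open_suspect_screen are placeholders for your own activity and navigation entry point. Run it while watching the heap in the profiler or with LeakCanary attached:

import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.Espresso.pressBack
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

// Repeatedly enter and leave the suspect screen to amplify any leak.
class RepeatWorkflowTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(MainActivity::class.java)

    @Test
    fun enterAndLeaveSuspectScreen100Times() {
        repeat(100) {
            onView(withId(R.id.open_suspect_screen)).perform(click())
            pressBack()
        }
    }
}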

You can't increase the heap size dynamically, but you can request a larger one by adding

android:largeHeap="true"

to your AndroidManifest.xml. You can add these lines to your manifest; it works in some situations:

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:largeHeap="true"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    ...
</application>

Whether your application's processes should be created with a large Dalvik heap. This applies to all processes created for the application. It only applies to the first application loaded into a process; if you're using a shared user ID to allow multiple applications to use a process, they all must use this option consistently or they will have unpredictable results. Most apps should not need this and should instead focus on reducing their overall memory usage for improved performance. Enabling this also does not guarantee a fixed increase in available memory, because some devices are constrained by their total available memory.

To query the available memory size at runtime, use the methods getMemoryClass() or getLargeMemoryClass().
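For example, a small Kotlin helper (the function name is illustrative) to see what the device actually grants with and without the flag:

import android.app.ActivityManager
import android.content.Context
import android.util.Log

// Log the normal and large heap limits (in MB) that this device allows.
fun logHeapLimits(context: Context) {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    Log.d("HeapLimits", "memoryClass=${am.memoryClass}MB largeMemoryClass=${am.largeMemoryClass}MB")
}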

Use coroutines for long or heavy operations. These crashes are coming from RxJava; maybe the work is not being handled correctly there.
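If you do move the periodic work off RxJava, a minimal coroutine sketch of an interval-style loop might look like this (startPeriodicWork is an illustrative name, not a library API):

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.Job
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch

// Periodic work tied to a CoroutineScope: the loop stops as soon as the
// scope is cancelled, so nothing keeps scheduling ticks in the background.
fun CoroutineScope.startPeriodicWork(periodMs: Long, work: suspend () -> Unit): Job =
    launch(Dispatchers.Default) {
        while (isActive) {
            work()
            delay(periodMs)
        }
    }

Launching it from viewModelScope or lifecycleScope means the loop is cancelled when its owner is destroyed, which avoids the never-disposed Observable.interval pattern that can keep queuing tasks indefinitely.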

I am just going to have a stab at where you could look. From the stack trace it looks like you are using a scheduler to perform tasks. My suspicion is that you are running multiple threads, and since each thread requires its own allocation of memory, something to consider would be controlling the number of threads through a thread pool. This will cap the number of threads available and recycle threads instead of allocating new ones, avoiding a significant number of threads running at the same time.
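A sketch of that idea for this codebase: the traces show an ExecutorScheduler, i.e. a scheduler created with Schedulers.from(executor), so backing it with one bounded pool shared across the app would cap the thread count (the pool size below is just an assumption to tune for your workload):

import io.reactivex.Scheduler
import io.reactivex.schedulers.Schedulers
import java.util.concurrent.Executors

// One shared, bounded pool reused everywhere instead of a new executor per call site.
object AppSchedulers {
    private val pool = Executors.newFixedThreadPool(4) // size is an assumption; tune it

    val background: Scheduler = Schedulers.from(pool)
}

Subscribing on AppSchedulers.background everywhere (and disposing subscriptions when their owner goes away) keeps the thread count fixed instead of letting it grow with each new subscription.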
