Increased amount of ANRs after disabling concurrent GC #9365
Comments
@TimBurik I'm afraid we can't do anything for you here, this appears to be an issue split between MonoVM and the Sentry Native SDK. @lambdageek do you think someone on your side could take a look to see if anything can be done here? There's an issue being worked on related to Sentry, #9055 - perhaps this issue is also caused by the same problems? @supervacuus, sorry for tagging you out of the blue, but would you be able to look into the Sentry side of things here? I realize we have precious little information, but perhaps someone will be able to spot something in their respective areas and help out. |
@grendello thank you for the response. Regarding #9055 - we are aware of that issue, and we have a separate native crash group which is most likely related to it. But the ANRs don't seem to be related to that issue or to Sentry, because we are seeing exactly the same picture in the Google Play Console - an increased amount of ANRs, all containing |
The GC ANRs might be related to something inside the GC bridge, hopefully @lambdageek will be able to help out here. However, this ANR looks familiar:
You are most likely not using marshal methods, since they weren't enabled by default in .NET 8, but we've recently fixed an issue related to them, and one of the repros had the above stack trace as well. The problem was related to Java ("native") threads being incorrectly attached to the MonoVM runtime and putting the GC in a bad state. You can find more info about it in this and the following comments. @filipnavara, does this look familiar? Perhaps something in your app causes a similar corruption of the GC state? @TimBurik if you're able to reproduce the ANRs locally, would you be able to test with .NET 9 RC2? |
@grendello thank you for the pointers! We are planning to test the app on .NET 9 in the near future; we will make sure to use RC2 for those tests (once RC2 is available). Also, we have added debug symbols for libmonosgen-2.0 to the Google Play Console, and now we have more detailed logs. For example, this is the reported main thread stack trace from one of the ANRs:
|
@TimBurik this looks more and more like the issue fixed by @filipnavara; he said he'd take a look next week - it would be great if you were able to test with .NET 9 RC2 before then :) |
@TimBurik if RC2 isn't available by next week, do try with RC1 - it's to see whether the issues you're seeing still exist in .NET 9 |
I've been following the thread since you tagged me. It seems this isn't sentry-related. I've informed sentry peeps, just in case. |
@supervacuus thanks, I tagged you just in case - this is all low-level stuff, it might be affecting Sentry as well in some ways. |
/cc @vitek-karas |
@grendello sorry for the late response
|
I don't know how the ANR system works; I would imagine ANRs happen if the app is unresponsive for a while. If this is the case, I'm not sure whether this issue can actually be considered a bug. We seem to have a problem with the concurrent GC in the runtime, and that issue needs to be investigated somehow. Disabling concurrent GC will just lead to longer GC pause times, which seem to trigger this ANR. Is a GC pause time of 0.5 - 1s supposed to trigger this ANR? If this is true, it seems to me that the best way to move forward would be to share the application, so we can see whether the serial GC takes longer than expected for whatever reason and, mainly, try some debug flags to see if we detect problems with the concurrent collector. |
@BrzVlad according to the Android documentation, one of the reasons for an ANR is a BroadcastReceiver failing to process an event within 5 seconds - this seems to be our case, as there's an additional description in the Google Play Console like "Broadcast of Intent { act=android.intent.action.SCREEN_ON }". By the way, let me share a full report from the Google Play Console, which contains stack traces from all the threads: anr_all_stacktraces.txt I could also try to gather additional information regarding GC performance, but in general we didn't notice any significant performance degradation after disabling concurrent GC |
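For context on the 5-second broadcast limit: in a .NET for Android app, a receiver's OnReceive is dispatched on the main thread, so if the main thread is parked (for example, waiting on a native mutex during a GC suspend), even a trivial receiver cannot run in time and the system reports an ANR for the broadcast. A minimal sketch, assuming a dynamically registered receiver (class name and registration are illustrative, not taken from the app in this issue):

```csharp
using Android.Content;
using Android.Util;

// Hypothetical receiver: its body is trivial, but OnReceive runs on the main
// thread, so it cannot execute while that thread is blocked elsewhere
// (e.g. in a GC stop-the-world pause), and the broadcast times out as an ANR.
class ScreenStateReceiver : BroadcastReceiver
{
    public override void OnReceive(Context? context, Intent? intent)
    {
        Log.Debug("ScreenStateReceiver", $"Received {intent?.Action}");
    }
}

// SCREEN_ON/SCREEN_OFF cannot be declared in the manifest, so the receiver is
// registered dynamically, e.g. in an Activity's OnCreate:
//   RegisterReceiver(new ScreenStateReceiver(), new IntentFilter(Intent.ActionScreenOn));
```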
@grendello in our case most of the ANRs are related to the But this is not the only source of ANRs for us, as we also have reports:
All of those reports have very similar main thread stack traces, related to GC. It even seems that those broadcast events, service calls and input events are not really causing the issue, and the main thread is already blocked by the time these events occur. |
@grendello @BrzVlad is there anything else we could do to help you shed light on this issue? |
Seeing that this issue happens after disabling concurrent GC, I was biased into assuming that the GCs simply take longer to run and this is causing issues. After a second look at the stack traces, it indeed looks like the suspend machinery is hanging. It makes no sense to me why this issue would only happen with the serial GC though. We have an env var |
@BrzVlad could you also give a hint as to where the thread dump can be found? In logcat we see the message: We are using the following values as Android environment variables:
and also we are using the |
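For reference, and as an assumption about how the variables above are wired up: in a .NET for Android project, Mono/GC environment variables are typically supplied through a plain-text file of KEY=value lines registered with the AndroidEnvironment build action. A minimal sketch (file name and variable values are placeholders, not the ones used in this app):

```xml
<!-- In the .csproj: register an environment file (name is illustrative). -->
<ItemGroup>
  <AndroidEnvironment Include="MonoEnvironment.txt" />
</ItemGroup>

<!-- MonoEnvironment.txt then contains plain KEY=value lines, e.g.:
     MONO_GC_PARAMS=nursery-size=32m
     MONO_LOG_LEVEL=info
-->
```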
I would have expected the suspend state of each thread to be printed, but it looks like that logging path doesn't go through the Android log but rather to the console. It would be useful for us to try to reproduce this failure, but given that the abort limit is not excessive and it only happens while in the background, it makes me wonder whether it is just the OS scheduling the app less frequently and the suspend process taking a bit longer than usual, because threads reach a GC safepoint location more slowly. |
This issue doesn't seem to affect every GC in the background; in the logcat we also see a couple of GC-related logs:
I'm also gonna try to reproduce this issue with bigger values for |
@BrzVlad Just managed to reproduce the issue with a limit value of 1s:
The reproduction steps are the same - put the application in the background and wait - although this time it also required several tries (foregrounding/backgrounding the app) and a considerable amount of waiting (30m+ after each backgrounding) |
Would it be possible to provide the sample app with some reproduction steps? |
Let me try to reproduce the issue on the sample app, and also discuss with the legal team the possibility of sharing our production code. Meanwhile, I have found a similar issue with a deadlock in GC stop-the-world, mono/mono#21582, which ended up being fixed by changing the value of the |
@TimBurik I also looked at your latest ANR dumps:

ANR_suspend_signal_handler_full1.txt: This one seems to be truncated, so some threads are cut out, but it still looks like we hit a rather interesting scenario. The main thread does some work that holds the loader lock, taken in mono_class_vtable_checked. Then there is "Thread-4", which runs finalizers and does a stackwalk, but it does a suspend-and-run to do the stackwalk, so I assume it has suspended the main thread while it held the loader lock. Then, as part of the stackwalk, it hits a case where it needs to create a class instance, which needs the loader lock, leading to a deadlock. The only way to request a thread dump that triggers this stackwalk is to explicitly call the mono_threads_request_thread_dump API. In Mono that happens only in the SIGQUIT handler, so that signal seems to have been fired, and this is a side effect leading up to the deadlock. I can probably fix this by making sure the stackwalk done in response to mono_threads_request_thread_dump is async safe: since it is designed to be called from within signal handlers, it should only do signal-safe operations, meaning it won't try to load additional classes if they aren't already loaded, etc. Since this is a side effect of a SIGQUIT signal, it will of course not tackle why that happens, but in the dump no other threads seem to be doing anything strange.

ANR_suspend_signal_handler_full2.txt: This one is slightly different, but it ends up in a similar deadlock. In this case we have the thread "AndroidAnrWatchDog", which calls into the Mono runtime to get a type name; that leads to a SIGSEGV (probably due to heap corruption), which calls our SIGSEGV handler that does a native crash, triggering a full stackwalk of the crashing thread. In parallel there is code triggering a GC: "Thread-14" holds the GC lock and waits for threads to suspend, another thread, "GLThread 1071", gets suspended while holding the loader lock, and the thread handling the SIGSEGV is about to stackwalk but doesn't do it async safe, leading to a deadlock on the loader lock. So it deadlocks inside the signal handler, meaning that the suspend signal also sent to this thread will wait, meaning that the GC thread will wait forever on this thread. Making sure we do an async-safe stackwalk from within the crash handler would fix this chain of events; it won't fix the root cause, which is probably heap corruption, but at least it should end up as a crash report instead of an ANR.

With these two dumps I have a couple of things to look at. It won't solve the underlying triggers (SIGQUIT and heap corruption), but it will at least make sure we handle these complex error-reporting scenarios better. |
Mono's stack-walker can be signal/async safe when needed, making sure it won't allocate additional memory or take internal runtime locks. It is, however, up to the caller of the stack-walking APIs to decide whether it should be signal and/or async safe. Currently this is controlled using two different flags, MONO_UNWIND_SIGNAL_SAFE as well as mono_thread_info_is_async_context. This is problematic: callers want signal-safe stack-walking, but since not both flags are set, it will not behave fully signal safe. dotnet/android#9365 hit a couple of scenarios, described here: dotnet/android#9365 (comment), that end up deadlocking because they did stack-walking from within a signal handler, or dumped the stack of a suspended thread holding the runtime loader lock, without making the stack-walk async safe. The fix makes sure that calls to the stack-walk APIs can be made signal and/or async safe and that the identified areas use the correct set of flags given the state of the threads being stack-walked.
The following PR should at least fix the deadlocks in the above ANRs: dotnet/runtime#113645. |
@TimBurik I looked back in the history of this issue and realized that you have a local repro of the issue - is that correct? If so, would it be possible for you to build a local runtime with some additional instrumentation (logging additional data to logcat) and use that together with your repro app? That way we could potentially get an idea of which thread doesn't seem to suspend, and then map that against the ANR to see which thread(s) we are waiting on. We could first try this with the original hybrid suspend mode and then potentially with preemptive suspend (with the above fix) for even more additional information. |
@lateralusX Thank you very much for taking the time to look into the dumps!
This actually sounds great! We were suspecting that some of those ANRs happen while processing crashes (because of the high amount of "background" ANRs in the Google Play Console). We would very much prefer those crashes to be reported as crashes and not as ANRs, as we currently have a disproportionately high ANR rate compared to the crash rate in production.
This one actually seems to be related to another ticket of ours about crashes: #9563 It seems that our custom ANR monitor implementation (the "AndroidAnrWatchDog" thread) was hitting some kind of infinite loop while collecting the context, requesting type name information over and over again, probably causing heap corruption in the end. Either way, after disabling this functionality the crash group is gone, and probably the related ANRs as well.
Yes, we do have a somewhat stable reproduction of the GC deadlock with a precise configuration:
Sure! Do you have any specific ideas in mind for what instrumentation we could add/turn on in the runtime? We would be able to run the test some time next week. |
@TimBurik Sounds great. Let me write up some instructions during the week and share them. Are you just using an Android SDK app (dotnet new android) or MAUI (dotnet new maui)? You will need to do a local build of the runtime enabling some additional logging in code, then use that version of the runtime when building your app and running the reproduction steps. The additional logging will hit logcat, so we will need to get some data out of logcat once the ANR reproduces; it will tell us more details about the threads we are waiting on as part of the STW (Stop The World). When we have that data, we can look in the ANR to detect which thread seems to be in the wrong state and why it doesn't seem to reach a safepoint in a timely manner. |
@lateralusX we are using Android SDK (android workload) |
* [Mono]: Fix additional stack-walks to be async safe. * Add signal async safe stack unwind option. * Assert that walk_stack_full_llvm_only is only called in llvm-only mode. * Correct some bool usage. * Make the signal-safe unwind option signal and async safe. Mono's current MONO_UNWIND_SIGNAL_SAFE was not fully signal safe since it was not async safe, which could lead to taking the loader lock. This fixes MONO_UNWIND_SIGNAL_SAFE to be signal and async safe; it also changes current uses of MONO_UNWIND_SIGNAL_SAFE to MONO_UNWIND_NONE, since they were equal before this fix, meaning old calls using MONO_UNWIND_SIGNAL_SAFE behave identically with MONO_UNWIND_NONE, so there is no regression.
hm that document is outdated, it talks about legacy Mono. I filed #9944. |
I will follow up with details on Monday; it will be a local Mono dotnet/runtime Android build plus use of the ResolvedRuntimePack override as described here: https://github.com/dotnet/runtime/blob/main/docs/workflow/debugging/mono/android-debugging.md#native-debugging-using-a-local-debug-build-of-mono. |
@TimBurik, here are the steps needed to build a local arm64 Android runtime with additional STW logging: Clone the dotnet/runtime repo and check out a branch - for .NET 8, https://github.com/dotnet/runtime/tree/release/8.0; for .NET 9, https://github.com/dotnet/runtime/tree/release/9.0. Patch
Build the Android runtime locally; make sure to follow the runtime prerequisites, but ignore the part about building runtime+tests in these instructions. Build runtime+libs using the following command line:
Once the build completes, the locally built runtime can be used when building a regular Android app with this change to the project's .csproj file (see the sketch after this comment): Make sure to rebuild the app (delete the previous obj/bin folders). If everything worked as expected, you should now see additional logging in logcat, prefixed with "[STW-" and using "THREAD" as the logcat tag, as soon as a GC gets triggered. Once you have verified that you get the needed logging in logcat, run the repro and collect the logcat logging + ANRs; this should give additional information on which threads the STW is waiting on, their callstacks and their current GC mode. Keep this setup around - we might need additional logging in case we still can't figure out why this happens with the currently enabled logging. |
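The exact .csproj change isn't reproduced above; based on the android-debugging.md doc linked earlier, the override is roughly of this shape (a sketch only - the PackageDirectory path is a placeholder for your local dotnet/runtime artifacts folder, and Debug/Release must match your build):

```xml
<!-- Sketch of the ResolvedRuntimePack override described in the linked runtime docs;
     adjust PackageDirectory to your local artifacts path. -->
<Target Name="UpdateRuntimePack" AfterTargets="ResolveFrameworkReferences">
  <ItemGroup>
    <ResolvedRuntimePack
        PackageDirectory="/path/to/dotnet-runtime/artifacts/bin/microsoft.netcore.app.runtime.android-arm64/Release"
        Condition="'%(ResolvedRuntimePack.FrameworkName)' == 'Microsoft.NETCore.App'" />
  </ItemGroup>
</Target>
```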
@lateralusX Thank you for the detailed instructions, they were very helpful! We did manage to build a local Android runtime with the patch you provided, and we do see additional STW messages in the logcat.
Example of the STW section in logcat:
|
@TimBurik, unfortunately not. I build and consume it from a dotnet new android template app, but I only ran it on the emulator; do you see the same issues on emulator and device? Did you build release versions of both the runtime and the libraries? I assume you also built a runtime version matching the Android SDK version you are currently using to build the app, correct? Maybe it would make sense for you to first try out the template app with the locally built runtime and see if that works as expected?

If we end up not getting this to work together with your app, we could do another hack and only replace libmonosgen-2.0.so directly in the Android runtime pack; that way only the locally built runtime shared library will be updated, and a release version of libmonosgen-2.0.so is API compatible with the dotnet Android SDK given the same dotnet version. If you want to take this approach, locate the locally built libmonosgen-2.0.so in artifacts/bin/microsoft.netcore.app.runtime.android-arm64/Release/runtimes/android-arm64/native and replace the version included in the installed workload package used when building the Android app: find the dotnet runtime used when building the app (dotnet --list-runtimes), then replace libmonosgen-2.0.so in packs/Microsoft.NETCore.App.Runtime.Mono.android-arm64/[version]/runtimes/android-arm64/native (keep the old .so file so you can restore it later). Then do a full rebuild of the app and make sure the libmonosgen-2.0.so file ending up in the APK is the new locally built version. |
yes, our application with the local Android runtime crashes at startup on both a real device and the emulator
I have used the following command; I assume it builds both the runtime and the libs:
Just in case, here is the full output of the build process: android_runtime_build_output.txt
I did not find any settings related to the Android SDK in the manual, except for providing the paths to the SDK and NDK via environment variables. But I have been using the same Android SDK+NDK:
I have checked the local Android runtime with a simple Android app (template + crash reporter initialization + a button to trigger an explicit nursery GC - see the sketch after this comment) and it works as expected on both device and emulator:
Let me try the other approach with copying |
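As an aside, the "button to trigger explicit nursery GC" mentioned above can be as small as a forced generation-0 collection, since gen0 maps to SGen's nursery. A minimal sketch of such a test activity (class and labels are illustrative, not the actual test app):

```csharp
using Android.App;
using Android.OS;
using Android.Widget;

[Activity(Label = "GC test", MainLauncher = true)]
public class GcTestActivity : Activity
{
    protected override void OnCreate(Bundle? savedInstanceState)
    {
        base.OnCreate(savedInstanceState);
        var button = new Button(this) { Text = "Trigger nursery GC" };
        SetContentView(button);
        button.Click += (_, _) =>
        {
            // Gen0 corresponds to the SGen nursery; forcing a minor collection
            // makes the STW logging in the local runtime show up on demand.
            System.GC.Collect(0, System.GCCollectionMode.Forced);
            Android.Util.Log.Info("GcTest", "Explicit nursery GC requested");
        };
    }
}
```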
Sorry, I was not precise about what I meant. I was referring to the version of the dotnet Android SDK used when building the app; you need to make sure you build the runtime using the same version as expected by the dotnet Android SDK. As said above, in this case I think we would need to go with minimal patching of libmonosgen-2.0.so to eliminate any other broken dependencies. |
@lateralusX sorry for the slow response, but after a lot of trial and error I finally managed to reproduce the ANR using the local Android runtime. I managed to make the local runtime work by using https://github.com/dotnet/runtime/tree/v9.0.2 as a base, instead of https://github.com/dotnet/runtime/tree/release/9.0 (I was using dotnet/runtime@d9d0ae0 originally, which caused crashes at startup). Regarding the reproduction itself, these are the last app-related logs we see in the logcat: ANR_reproduction_logcat_latest.txt At first glance, it seems that the ANR is happening at an early stage of the STW, and it is hard to tell what the cause is based on these logs. Do you know of other logs we could enable to help us detect the root cause of the issue? |
@TimBurik, fantastic that you managed to get things working and that you have a repro with additional logging. I hoped that our initial logging would allow us to identify the thread ids that we are waiting on and then correlate them to the ANR report and callstacks, but it looks like the tid used in the ANR is the kernel id while we use the pthread thread id in the logging, so we need to tweak that logging a little to include the kernel thread id, so we can correlate the threads in our logging to the threads in the ANR report.

The data collected so far gives us some more information: we block in the first phase of the STW, that is, when we go over all attached threads' state and, for those in RUNNING mode, ask them to cooperatively suspend and wait for completion. That indicates that at least one of the threads in the list is still in RUNNING mode, probably blocked outside of runtime or managed code waiting on some external event. It is also worth noting that all threads included in the current STW were also part of the previous successful STW, so at that point all threads managed to suspend successfully.

In order to get the kernel thread id included in the STW logging, please apply this patch on top of the patch already applied:
We could increase the STW logging to get even more details on the suspend machinery; we just don't want to flood it with logging that could prevent the issue from reproducing. In order to enable more STW logging (but not all of it), you can apply this patch as well:
If the patches don't apply cleanly (they are generated from main), it should be straightforward to make the changes manually. When you repro the ANR with the additional logging, please provide the logging + the related ANR report (symbolicated), so we can figure out which threads we currently have in our STW and where they are currently blocked, as reported by the callstacks in the ANR report. Thank you so much for assisting on this - being able to run a repro of this issue with additional logging is fantastic. Looking forward to the results. |
@lateralusX thank you for your help! I have applied both of the patches from your latest message and right away managed to catch a freeze, which was later reported by the system as an ANR due to an input response timeout. Although this freeze/ANR was not reproduced using our usual reproduction steps, and thus might be a different, unrelated issue, I have collected all available data just in case:
Meanwhile, let me also try to reproduce the freeze/ANR using our usual reproduction steps, just to make sure whether this is the same issue or not |
@TimBurik Took a quick first glance and it looks like it deadlocks on the additional logging: it uses __android_log_write_log_message, which is not fully async-signal safe, and since the additional logging can now happen on other threads, which might get suspended, it could deadlock. I can look into making the logging async-signal safe, but maybe we should just go with the limited set of logging first, with the new ability to map threads to the ANR report, so just back out:
and run again and see if it repros in the same way as it originally did. |
@lateralusX I've disabled additional Here's all the information I managed to collect:
|
@TimBurik, fantastic. I listed the threads that are attached to MonoVM and appear to be in RUNNING state (we probably wait for them in the first phase of the STW); they all seem to be waiting on external events and won't reach a safepoint in managed code, blocking the STW. Since we didn't get the more detailed thread output working yet, we don't know exactly which threads from the list below are already in BLOCKING mode vs RUNNING mode. A thread attached to the runtime in RUNNING mode (mono_thread_attach) can't wait on external events like the threads below do.
If you are running 9.0.2, MarshalMethods should attach threads using mono_jit_thread_attach - the fix was done here: 27b5d2e - otherwise they ended up attaching threads running in the Java thread pool in a way not compatible with the hybrid suspend model. It might be worth disabling them and seeing whether things still repro (see the sketch after the thread list below). The next step is to see if we can get some more logging, at least on the STW thread, to identify which threads are still in running vs blocking state during the first STW phase when we hit the ANR - maybe fix up the thread dump to work on a schedule, like dumping each second for the first couple of seconds of waiting. Since we shouldn't call mono_thread_attach on these threads, we could also include an assert if that happens, and we would get an abort + callstack showing where the call is coming from. Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
Waiting on the thread pool for new work:
?? :
Waiting on the finalizer queue:
|
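Regarding the suggestion above to disable marshal methods for a test build: this is controlled by an MSBuild property. A hedged sketch (property name is my assumption based on the .NET Android SDK documentation; double-check it against the SDK version in use):

```xml
<!-- Sketch: turn off LLVM marshal methods for a diagnostic build.
     Property name is an assumption; verify against your .NET Android SDK docs. -->
<PropertyGroup>
  <AndroidEnableMarshalMethods>false</AndroidEnableMarshalMethods>
</PropertyGroup>
```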
@lateralusX thank you for looking into this, and sorry for the late response.
No, and as far as I know we don't run any native threads ourselves, but some of the third-party libraries we use might.
I assume you are referring to the Regarding the list of threads, do I understand correctly that one or some of them are actually responsible for the ANR? I am going to try to identify those threads, but so far the one that is easiest to identify is Meanwhile, if you have any ideas on how to further enhance the logging and make it more informative for this case, I would be glad to test them out.
@TimBurik, regarding Regarding the threads: the threads I listed above are attached to the runtime, so they will need to reach safepoints, or already be in a GC safe region and end up being preemptively suspended. The above threads are not preemptively suspended, so they are either already in a GC safe region or in RUNNING, meaning they have been attached in a way not compatible with waiting on external scheduler events the way they do. We would need additional thread-specific logging to know what state each thread is currently in. We could try to detect whether we get threads attached using
and change that to:
rebuild the runtime, rebuild the app with the changed runtime, and re-run the repro. So to summarize:
I will prepare some more detailed thread logging unless the above gives some more clues. |
@lateralusX I've just tested both of the cases, with the following results:
|
@TimBurik, thanks for checking these scenarios so we can rule them out. It looks like we need to try to extract more information from the threads to see if we can pinpoint which thread(s) are either not in a GC safe region (BLOCKING state) or are in RUNNING but never reach a safepoint in a timely manner. Let's start with the following patch (on top of the already applied patches) to output state transitions as part of the first STW phase logging:
and
|
@lateralusX I guess we've found the culprit, although a rather unexpected one, and I'm not sure what to make of it. I reproduced the ANR with the additional thread state information, and the last STW log looks like this:
I assume this means that thread 0x382c (14380) is the one blocking the GC? And in the ANR dump there's a record for this thread, but it looks rather strange:
This is one of the long-running threads used in our application (either by us directly or by one of the third parties); we should be able to figure out which one it is if needed. But I was under the impression that AOT libs, unlike usual native libs, don't have any instructions to execute and only contain precompiled code as a data block. Could this single line in the thread dump mean that the Mono runtime is trying to load this AOT lib into memory? Just in case, here is the additional context I managed to collect:
It is also worth pointing out that I've made another reproduction of the ANR using the same steps, and it has all the same symptoms: one of the threads is in
where I've also checked the earlier reproduction of the ANR, and it also contains a long-running thread with a single AOT frame, although there's no evidence that this thread was blocking the GC in that case:
|
@TimBurik Nice! Yes, looking at the new logging, all threads are already in BLOCKING mode except 0x382c and 0x3787, and 0x3787 is the thread running the GC, so it will be ignored. That leaves us with 0x382c: since it's in RUNNING, we will ask it to suspend (ASYNC_SUSPEND_REQUESTED) and then wait for it to reach a safepoint in managed code. Apparently that doesn't happen for this thread - if it hit a safepoint you would see some native frames on that thread's callstack - so it's still executing managed code and appears to never reach a safepoint. Looking at the previous successful GC, 0x382c is in BLOCKING state during the first STW phase. Looking back at the last logging for that thread we get: 04-29 17:17:02.533 14215 14380 D Mono : AOT NOT FOUND: System.Runtime.CompilerServices.AsyncTaskMethodBuilder So it has at least been running that code some time before hitting the ANR.

The reason you don't see a callstack in the ANR for thread 0x382c is that the Android native unwinder doesn't handle C# managed frames, so it can't unwind through them; we would need to dump the managed callstack of the thread in order to see what code it's running. The AOT module includes pre-compiled native code that we load and execute, instead of going through the JIT, if the loader finds it. It normally reduces startup times since it eliminates the need to JIT compile a large number of methods during startup. Managed code will look for safepoints when calling functions, when handling exceptions and on loop back-edges; AOT-compiled code should also include safepoints, given that it has been compiled using a matching suspend model (hybrid), and there should be guards in the Mono loader to prevent a mismatch between the AOT suspend model used when AOT compiling libraries and the runtime, so the AOT code should have safepoints included.

I will try to disassemble your AOT module to see whether it appears to have safepoints included, and also take a look at 000000000021e564 to see what the code seems to be doing. It would also be worth testing without the AOT-compiled methods, to rule out issues around missing safepoints in AOT-compiled code. I believe it can be accomplished by setting the msbuild property RunAOTCompilation to false when building the app (see the sketch after this comment), but you need to double check that it does the right thing. If it works, you shouldn't get any libaot .so files in the APK.

If you could get hold of the managed stack of that thread, that would be great. If not, I will see if we can implement something in response to the ANR to dump thread stacks to logcat, but I need to finish some other work items before pursuing that task. |
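For the AOT test mentioned above, a hedged sketch of the property change (as noted, verify that no libaot-*.so files end up in the resulting APK after the rebuild):

```xml
<!-- Sketch: build a diagnostic configuration without AOT-compiled assemblies,
     so all managed code goes through the JIT and gets the JIT's safepoints. -->
<PropertyGroup>
  <RunAOTCompilation>false</RunAOTCompilation>
</PropertyGroup>
```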
@lateralusX Thanks for the analysis! I've just managed to reproduce the ANR using the build with AOT disabled, and the thread responsible for blocking the GC has the following dump:
I suppose this means that the issue is not AOT-specific. I'm not sure I would be able to get hold of the managed stack trace exactly at the moment of the ANR without further updates to the runtime source code, but I can definitely investigate this blocking thread from the app-logic perspective in the meantime. |
Android framework version
net8.0-android
Affected platform version
.NET 8.0.303
Description
After switching from Xamarin.Android to .NET 8 we used to get a lot of native crashes in monosgen, for example: 222003f52fa3496385d14a89c778a6e4-symbolicated.txt
After a long investigation (and a hint from the issue dotnet/runtime#100311) it turns out that concurrent SGen is actually enabled by default in .net-android:
android/src/Xamarin.Android.Build.Tasks/Microsoft.Android.Sdk/targets/Microsoft.Android.Sdk.DefaultProperties.targets
Line 9 in df9111d
so we explicitly disabled it - and now the number of native crashes in monosgen is minimal, but instead we are getting a lot of ANR reports in Sentry and the Google Play Console.
ANRs seem to be reported using Android's ApplicationExitInfo mechanism; according to the stack traces, the main thread seems to be blocked waiting on a native mutex somewhere in monosgen (example: anr_stacktrace.txt)
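For completeness, the ApplicationExitInfo records mentioned here can also be read back from managed code on Android 11+. A minimal sketch using the .NET bindings (member names follow my recollection of the Mono.Android binding conventions and should be treated as assumptions, not verified API):

```csharp
using Android.App;
using Android.Content;
using Android.OS;
using Android.Util;

static class AnrAudit
{
    // Hypothetical helper: log recent exit records for this app and flag the ANR ones.
    public static void LogRecentAnrs(Context context)
    {
        if (Build.VERSION.SdkInt < BuildVersionCodes.R)
            return;

        var am = (ActivityManager?)context.GetSystemService(Context.ActivityService);
        // pid = 0 / maxNum = 10: the last few exit records for the calling package.
        var exits = am?.GetHistoricalProcessExitReasons(context.PackageName, 0, 10);
        if (exits == null)
            return;

        foreach (var info in exits)
        {
            // REASON_ANR == 6 per the Android documentation.
            if ((int)info.Reason == 6)
                Log.Warn("AnrAudit", $"ANR at {info.Timestamp}: {info.Description}");
        }
    }
}
```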
Additional information, which might be relevant:
Intent { act=android.intent.action.SCREEN_OFF }
or Intent { act=android.intent.action.SCREEN_ON };
Steps to Reproduce
Unfortunately, we don't have exact steps to reproduce.
The only thing that is certain is that it happens when targeting
.net-android34.0
(the Xamarin.Android version doesn't have this issue), and the issue started happening after adding the following to the csproj: <AndroidEnableSGenConcurrent>false</AndroidEnableSGenConcurrent>
Did you find any workaround?
No workaround found yet
Relevant log output
No response