task #9268: Check cdm for GC G1 humongous objects problem
Status: open
% Done: 10
Description
Problems with the G1 GC when there are objects >= 1 MB in the heap:
https://dzone.com/articles/whats-wrong-with-big-objects-in-java
This might affect exports or imports, or indexing.
If it turns out that the cdm is affected, it might be a good idea to upgrade to Java 11.
I found this page on diagnosing humongous object allocations, which seems to be very helpful:
https://plumbr.io/handbook/gc-tuning-in-practice/other-examples/humongous-allocations
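For reference, a minimal sketch of the JVM options that make such allocations visible in the GC log, assuming a Java 8 HotSpot JVM (the [G1Ergonomics ...] lines quoted further below are produced by -XX:+PrintAdaptiveSizePolicy); the log file path is a placeholder:

# Java 8 HotSpot flags (sketch, not the server's actual configuration)
-XX:+UseG1GC
-XX:+PrintGCDetails
-XX:+PrintAdaptiveSizePolicy      # prints the [G1Ergonomics ...] decisions, incl. humongous allocation causes
-Xloggc:/path/to/gc.log           # placeholder path
# optional: pin the region size so the humongous threshold (half a region) is explicit
-XX:G1HeapRegionSize=2m
# on Java 9+/11, unified logging replaces these flags, roughly: -Xlog:gc*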
Updated by Andreas Kohlbecker almost 3 years ago
- Related to task #6981: Migrate to Java 11 added
Updated by Andreas Kohlbecker almost 3 years ago
- Tags changed from performance to performance, java
- Description updated (diff)
Updated by Andreas Müller almost 3 years ago
I can't see that we have objects of this size in real CDM data.
Updated by Andreas Kohlbecker almost 3 years ago
I enabled G1 GC logging as described in https://plumbr.io/handbook/gc-tuning-in-practice/other-examples/humongous-allocations to search for humongous object allocations on the test server. A couple of minutes after rebooting, while the server is still starting its instances, G1 Humongous Allocation events are already being reported:
213.066: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]
213.067: [GC pause (G1 Humongous Allocation) (young) (initial-mark)
213.067: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 31247, predicted base time: 83.62 ms, remaining time: 116.38 ms, target pause time: 200.00 ms]
220.232: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]
220.416: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: concurrent cycle already in progress, GC cause: G1 Humongous Allocation]
374.271: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]
374.324: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: concurrent cycle already in progress, GC cause: G1 Humongous Allocation]
385.183: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]
385.183: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]
385.183: [GC pause (G1 Humongous Allocation) (young) (initial-mark)
385.183: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 40507, predicted base time: 76.38 ms, remaining time: 123.62 ms, target pause time: 200.00 ms]
392.099: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]
392.219: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: concurrent cycle already in progress, GC cause: G1 Humongous Allocation]
So there is strong evidence that we do indeed have humongous objects!
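For context: an object counts as humongous for G1 as soon as its size reaches half of the region size. A minimal, hypothetical Java sketch (not CDM code; names and the 2 MB region size are assumptions chosen for illustration):

public class HumongousDemo {
    // run e.g. with: java -XX:+UseG1GC -XX:G1HeapRegionSize=2m HumongousDemo
    public static void main(String[] args) {
        // 1 MiB is half of a 2 MB region, so G1 allocates this array as a humongous object
        byte[] big = new byte[1024 * 1024];
        System.out.println("allocated " + big.length + " bytes");
    }
}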
Updated by Andreas Kohlbecker almost 3 years ago
- Assignee changed from Andreas Müller to Andreas Kohlbecker
- Target version changed from Unassigned CDM tickets to Release 5.19
- % Done changed from 0 to 10
Updated by Andreas Kohlbecker almost 3 years ago
- Status changed from New to In Progress
Updated by Andreas Kohlbecker almost 3 years ago
- File g1-humongous-allocations.txt g1-humongous-allocations.txt added
- % Done changed from 10 to 20
Further results from the test server after startup and after running the Data Portal Cacher for E+M to about 8.5% progress (g1-humongous-allocations.txt): 176 G1 Humongous Allocations have been reported.
These big objects may be result sets from database queries, but other cases are also possible.
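A hypothetical sketch (not actual CDM code) of why a fully materialized query result can become humongous even when each row object is small: the list's single backing array grows with the row count.

import java.util.ArrayList;
import java.util.List;

public class ResultSetSketch {
    public static void main(String[] args) {
        // imagine each element is an entity loaded from the database
        List<Object> rows = new ArrayList<>();
        for (int i = 0; i < 500_000; i++) {
            rows.add(new Object()); // each row object is tiny...
        }
        // ...but the ArrayList's internal Object[] alone is roughly 2-4 MB
        // (4 or 8 bytes per reference), i.e. a humongous object for G1
        System.out.println("rows: " + rows.size());
    }
}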
Updated by Andreas Kohlbecker almost 3 years ago
- Status changed from In Progress to Feedback
- Assignee changed from Andreas Kohlbecker to Andreas Müller
- % Done changed from 20 to 10
This finding can be especially relevant for imports and exports, therefore we should examine the I/O functionalities which have been reported to cause problems in the recent past. Didn't Walter have problems a couple of weeks ago?
Do you remember anything like this, Andreas & Katja?
Updated by Katja Luther almost 3 years ago
Andreas Kohlbecker wrote:
This finding can be especially relevant for imports and exports, therefore we should examine the I/O functionalities which have been reported to cause problems in the recent past. Didn't Walter have problems a couple of weeks ago?
Do you remember anything like this, Andreas & Katja?
Yes, the cdmlight export for larger subtrees or a whole classification can cause memory problems.
Updated by Andreas Müller over 2 years ago
- Target version changed from Release 5.19 to Release 5.21
Updated by Andreas Müller over 2 years ago
- Target version changed from Release 5.21 to Release 5.22
Updated by Andreas Müller over 2 years ago
- Status changed from Feedback to New
- Target version changed from Release 5.22 to Release 5.44
Updated by Andreas Müller over 1 year ago
- Target version changed from Release 5.44 to Release 5.42
Updated by Andreas Müller 9 months ago
- Tags changed from performance, java to performance