COLT MCANLIS: One important part of writing successful applications is enabling the programmer to be as efficient as possible, so that they can focus their brain cycles on the truly important problems. But sometimes this ease of development can create new performance problems that aren't always clear. My name is Colt McAnlis, and while the Android runtime provides
lots of opportunities for programmers to be more efficient, it also presents a lot of hidden pitfalls with respect to performance. And the single biggest one that you need to worry about has everything to do with how you're allocating and using memory.

See, many programming languages that are known for being "high performance," like C and C++, usually require programmers to manage memory themselves. That is, the programmer is responsible for allocating blocks of memory off of the heap during code execution, and explicitly freeing them back to the heap once they aren't being used anymore. But in a 2-million-line code base, it's easy to get lost in logic flows and end up not freeing allocated memory as intended. These types of allocations are called leaks -- that is, memory which was allocated but never freed.
Now, a managed memory environment, on the other hand, removes this burden of freeing memory from the programmer's shoulders. See, it keeps track of each memory allocation, and once it determines that a piece of memory is no longer being used by the program, it can free it back to the heap without any intervention from the programmer. Which is great, because we can spend that extra time doing other things -- like arguing about whether or not crossguards work on lightsabers.

Anyhow, the process of reclaiming memory in a managed environment is known as garbage collection. It's a concept that was created by John McCarthy back in 1959 to solve problems in the Lisp programming language, and it generally adheres to two primary principles: find data objects in a program that cannot be accessed by the program in the future, and reclaim the resources used by those objects.

Now, think about it. Garbage collection can be really gnarly. I mean, if you've got some 20,000 allocations in your program right now, which ones aren't needed anymore? Or better yet, when should you execute a garbage collection event to free up memory that isn't being used? These are actually very difficult questions.
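To make that reachability idea concrete, here's a minimal sketch in plain Java. The class and variable names are just illustrative, and System.gc() is only a hint to the runtime, not a command:

```java
import java.util.ArrayList;
import java.util.List;

public class ReachabilityDemo {
    public static void main(String[] args) {
        // Allocate a large object and keep a reference to it. While this
        // reference exists, the object is reachable, so the collector
        // cannot reclaim it.
        List<byte[]> retained = new ArrayList<>();
        retained.add(new byte[4 * 1024 * 1024]); // roughly 4 MB

        // Drop the references. The byte array is now unreachable, which
        // makes it eligible for collection -- but the runtime decides
        // when (or whether) a GC event actually runs.
        retained.clear();
        retained = null;

        // Only a hint; the runtime is free to ignore it.
        System.gc();
    }
}
```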
And thankfully, we've had about 50 years' worth of innovation to improve on them, which is why the garbage collector in the Android runtime is quite a bit more sophisticated than McCarthy's original proposal. It's been built to be fast and as non-intrusive as possible.

Effectively, the memory heap in Android's runtime is segmented into spaces, based on the type of allocation and how best the system can organize those allocations for future GC events. As a new object is allocated, its characteristics are taken into account to best fit what space it should be placed into, depending on what version of the Android runtime you're using. And here's the important part: each space has a set size. As objects are allocated, we keep track of the combined sizes.
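You can't see those individual spaces from application code, but you can watch the managed heap's overall numbers move as you allocate. Here's a rough sketch using the standard Runtime APIs; the exact values you'll see depend on the device and the runtime version:

```java
public class HeapWatcher {
    // Prints a rough snapshot of the managed heap: how much the runtime has
    // currently reserved, how much of that is in use, and the limit it will
    // grow toward before throwing an OutOfMemoryError.
    static void printHeapStats(String label) {
        Runtime rt = Runtime.getRuntime();
        long usedKb = (rt.totalMemory() - rt.freeMemory()) / 1024;
        long totalKb = rt.totalMemory() / 1024;
        long maxKb = rt.maxMemory() / 1024;
        System.out.println(label + ": used=" + usedKb + " KB, total="
                + totalKb + " KB, max=" + maxKb + " KB");
    }

    public static void main(String[] args) {
        printHeapStats("before");

        // Allocate 16 MB in 1 MB blocks and watch the numbers grow.
        byte[][] blocks = new byte[16][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[1024 * 1024];
        }

        printHeapStats("after allocating 16 MB");
    }
}
```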
And as a space starts to grow, the system will need to execute a garbage collection event in an attempt to free up memory for future allocations.

Now, it's worth pointing out that GC events will behave differently depending on which Android runtime you're using. For example, in Dalvik, many GC events are "stop the world" events, meaning that any managed code that is running will stop until the operation completes. Which can get very problematic when these GCs take longer than normal, or there's a ton of them happening at once, since that's going to significantly eat into your frame time. ART, on the other hand, added a concurrent GC system, which tends to remove the larger GC pauses, but will still incur a small pause at the end of important GC events. And to be clear, our engineers have spent a lot of time making sure that these events are as fast as possible to reduce interruptions.

That being said, this can still cause your application some performance headaches. Firstly, understand that the more time your app spends doing GCs in a given frame, the less time it's got for the rest of the logic needed to keep you under the 16 millisecond barrier for rendering. So if you've got a lot of GCs, or some long ones right after each other, they might push your frame processing time over the 16 millisecond barrier, which can cause visible hitching, or jank, for your users.
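If you want to see when you're blowing that budget in a running app, one option is to watch frame-to-frame timing with Choreographer. This is just a sketch: the 16 ms threshold and log tag are illustrative, and a long gap only tells you a frame was late, not that GC was the cause.

```java
import android.util.Log;
import android.view.Choreographer;
import java.util.concurrent.TimeUnit;

// Logs a warning whenever the gap between two vsync callbacks blows past
// the ~16 ms budget for 60 fps. Long or frequent GC pauses inside a frame
// are one of the things that can push you over it.
public class FrameBudgetWatcher implements Choreographer.FrameCallback {
    private static final String TAG = "FrameBudget"; // illustrative tag
    private static final long BUDGET_NS = TimeUnit.MILLISECONDS.toNanos(16);

    private long lastFrameTimeNanos = 0;

    // Call from the main thread (Choreographer requires a Looper thread).
    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (lastFrameTimeNanos != 0) {
            long elapsed = frameTimeNanos - lastFrameTimeNanos;
            if (elapsed > BUDGET_NS) {
                Log.w(TAG, "Missed frame budget: "
                        + TimeUnit.NANOSECONDS.toMillis(elapsed) + " ms");
            }
        }
        lastFrameTimeNanos = frameTimeNanos;
        // Re-register to keep watching subsequent frames.
        Choreographer.getInstance().postFrameCallback(this);
    }
}
```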
Secondly, understand that your code flow may be doing the kinds of work that force GCs to occur more often, or make them last longer than normal. For example, if you're allocating a horde of objects in the innermost part of a loop that runs for a really long time, then you're going to be polluting your memory heap with a lot of objects, and you'll end up kicking off a lot of GC events in quick succession due to this additional memory pressure. And these types of programming patterns are easier to run into than you'd think.
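Here's what that pattern tends to look like, and one way to fix it. This is a hypothetical sketch, not code from a real app; the point is that the reused version allocates nothing inside the hot loop:

```java
public class AllocationPressure {
    // A small value holder; in real code this might be any temporary object.
    static final class Point {
        int x, y;
        Point(int x, int y) { set(x, y); }
        void set(int x, int y) { this.x = x; this.y = y; }
        long lengthSquared() { return (long) x * x + (long) y * y; }
    }

    // Anti-pattern: a fresh Point on every pass of a hot loop. Each one is a
    // short-lived heap allocation, and the combined pressure forces GC
    // events to run more often.
    static long sumLengthsWasteful(int[] xs, int[] ys) {
        long total = 0;
        for (int i = 0; i < xs.length; i++) {
            Point p = new Point(xs[i], ys[i]); // garbage every iteration
            total += p.lengthSquared();
        }
        return total;
    }

    // Better: hoist one mutable instance out of the loop and reuse it, so
    // the hot path allocates nothing at all.
    static long sumLengthsReused(int[] xs, int[] ys) {
        long total = 0;
        Point scratch = new Point(0, 0); // allocated once
        for (int i = 0; i < xs.length; i++) {
            scratch.set(xs[i], ys[i]);
            total += scratch.lengthSquared();
        }
        return total;
    }
}
```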
So thankfully, the Android SDK has a set of powerful tools at your disposal. For example, you can get a high-level view of how your application is managing memory using the Memory Monitor tool inside of Android Studio. Every time you see a dip in the allocated memory, that's a GC event occurring, and lots of dips in a short amount of time can signal a performance problem. And you can see what objects are active in your heap, and what parts of your code are allocating them, with the Heap Viewer and Allocation Tracker tools as well.
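If you'd rather capture a heap snapshot from inside the app itself, say on a debug build, android.os.Debug can write an HPROF file that you can pull off the device and inspect with your heap tooling. A minimal sketch, with an example file name:

```java
import android.content.Context;
import android.os.Debug;
import java.io.File;
import java.io.IOException;

public final class HeapDumper {
    private HeapDumper() {}

    // Writes an HPROF snapshot of the managed heap into the app's private
    // files directory, and returns the file so it can be pulled and
    // inspected later.
    public static File dumpHeap(Context context) throws IOException {
        File out = new File(context.getFilesDir(), "debug-heap.hprof"); // example path
        Debug.dumpHprofData(out.getAbsolutePath());
        return out;
    }
}
```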
But wrangling memory into performant shape is easier said than done, which is why you need to check out the rest of our Android Performance Patterns content for other great ways to improve performance.
And don't forget to join our Google+ community for excellent info as well. So keep calm, profile your code, and always remember: perf matters.

[MUSIC PLAYING]