Understanding Garbage Collection in JavaScriptCore From Scratch

Jul 29, 2022

by Haoran Xu

JavaScript relies on garbage collection (GC) to reclaim memory. In this post, we will dig into JSC’s garbage collection system.

Before we start, let me briefly introduce myself. I am Haoran Xu, a PhD student at Stanford University. While I have not yet contributed a lot to JSC, I found JSC a treasure of elegant compiler designs and efficient implementations, and my research is exploring ways to transfer JSC’s design to support other programming languages like Lua at a low engineering cost. This post was initially posted on my blog — great thanks to the WebKit project for cross-posting it on their official blog!

Filip Pizlo’s blog post on GC is great at explaining the novelties of JSC’s GC, and also positions it within the context of various GC schemes in academia and industry. However, as someone with little GC background, I felt the blog alone was insufficient to give me a solid understanding of the algorithm and the motivation behind the design. Through digging into the code, and with some great help from Saam Barati, one of JSC’s lead developers, I wrote up this blog post in the hope that it can help more people understand this beautiful design.

The garbage collector in JSC is non-compacting, generational, and mostly [1] concurrent. On top of being concurrent, JSC’s GC heavily employs lock-free programming for better performance.

As you can imagine, JSC’s GC design is quite complex. Instead of diving into the complex invariants and protocols, we will start with a simple design, and improve it step by step to converge at JSC’s design. This way, we not only understand why JSC’s design works, but also how JSC’s design was constructed over time.

But first of all, let’s get into some background.

Memory Allocation in JSC

Memory allocators and GCs are tightly coupled by nature – the allocator allocates memory to be reclaimed by the GC, and the GC frees memory to be reused by the allocator. In this section, we will briefly introduce JSC’s memory allocators.

At the core of the memory allocation scheme in JSC is the data structure BlockDirectory [2]. It implements a fixed-size allocator, that is, an allocator that only allocates memory chunks of some fixed size S. The allocator keeps track of a list of fixed-sized (in current code, 16KB) memory pages (“blocks”) it owns, and a free list. Each block is divided into cells of size S, and has a footer at its end [3], which contains metadata needed for the GC and allocation, e.g., which cells are free. By aggregating and sharing metadata at the footer, it both saves memory and improves performance of related operations: we will go into the details later in this post.

When a BlockDirectory needs to make an allocation, it tries to allocate from its free list. If the free list is empty, it tries to iterate through the blocks it owns [4], to see if it can find a block containing free cells (which are marked free by GC). If yes, it scans the block footer metadata to find all the free cells [5] in this block, and puts them into the free list. Otherwise, it allocates a new block from malloc [6]. Note that this implies a BlockDirectory’s free list only contains cells from one block: this block is called m_currentBlock in the code, and we will revisit this later.
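In sketch form, the allocation flow looks like this (pseudo-C++; the helper names are illustrative, not JSC’s exact API):

```cpp
// Sketch of BlockDirectory's allocation flow for a fixed cell size S.
void* BlockDirectory::allocate() {
    if (void* cell = m_freeList.pop())            // fast path: free list is non-empty
        return cell;
    for (Block* block : m_blocks) {               // slow path: find a block with free cells
        if (block->footer()->hasFreeCells()) {    // cells marked free by the GC
            m_currentBlock = block;
            m_freeList = block->footer()->scanForFreeCells();
            return m_freeList.pop();
        }
    }
    m_currentBlock = Block::allocateFromMalloc(); // slowest path: a new 16KB block
    m_blocks.append(m_currentBlock);
    m_freeList = m_currentBlock->fullFreeList();
    return m_freeList.pop();
}
```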

BlockDirectory is used as the building block to build the memory allocators in JSC. JSC employs three kinds of allocators:

  • CompleteSubspace : this is a segregated allocator responsible for allocating small objects (max size about 8KB). Specifically, there is a pre-defined list of exponentially-growing size-classes [7] , and one BlockDirectory is used to handle allocation for each size class. So to allocate an object, you find the smallest size class large enough to hold the object, and allocate from the directory for that size class.
  • PreciseAllocation : this is used to handle large allocations that cannot be handled by the CompleteSubspace allocator [8] . It simply relies on the standard (malloc-like) memory allocator, though in JSC a custom malloc implementation called libpas is used. The downside is that since a PreciseAllocation is created on a per-object basis, the GC cannot aggregate and share metadata information of multiple objects together to save memory and improve performance (as MarkedBlock ’s block footer did). Therefore, every PreciseAllocation comes with a whopping overhead of a 96-byte GC header to store the various metadata information needed for GC for this object (though this overhead is justified since each allocation is already at least 8KB).
  • IsoSubspace : each IsoSubspace is used to allocate objects of a fixed type with a fixed size. So each IsoSubspace simply holds a BlockDirectory to do allocation (though JSC also has an optimization for small IsoSubspace by making them backed by PreciseAllocation [9] ). This is a security hardening feature that makes use-after-free-based attacks harder [10] .

IsoSubspace is mostly a simplified CompleteSubspace , so we will ignore it for the purpose of this post. CompleteSubspace is the one that handles the common case: small allocations, and PreciseAllocation is mostly the rare slow path for large allocations.
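To make the division of labor concrete, here is a sketch of how an allocation is routed (the size-class list follows footnote [7]; sizeClassFor and kCompleteSubspaceMaxSize are assumed helpers, not JSC’s exact names):

```cpp
// Sketch: route an allocation to the right allocator.
void* allocateObject(size_t bytes) {
    if (bytes > kCompleteSubspaceMaxSize)           // about 8KB
        return PreciseAllocation::allocate(bytes);  // rare slow path, malloc-backed
    size_t index = sizeClassFor(bytes);             // smallest class >= bytes:
                                                    // 16, 32, 48, 64, 80, 80*1.4^n, ...
    return completeSubspace.directory(index)->allocate();
}
```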

Generational GC Basics

In JSC’s generational GC model, the heap consists of a small “new space” (eden), holding the newly allocated objects, and a large “old space” holding the older objects that have survived one GC cycle. Each GC cycle is either an eden GC or a full GC . New objects are allocated in the eden. When the eden is full, an eden GC is invoked to garbage-collect the unreachable objects in eden. All the surviving objects in eden are then considered to be in the old space [11] . To reclaim objects in the old space, a full GC is needed.

The effectiveness of the above scheme relies on the so-called “generational hypothesis”:

  • Most objects collected by the GC are young objects (died when they are still in eden), so an eden GC (which only collects the eden) is sufficient to reclaim most newly allocated memory.

  • Pointers from old space to eden are much rarer than pointers from eden to old space or pointers from eden to eden, so an eden GC’s runtime is approximately linear in the size of the eden, as it only needs to start from a small subset of the old space. This implies that the cost of GC can be amortized by the cost of allocation.

Inlined vs. Outlined Metadata: Why?

Practically every GC scheme uses some kind of metadata to track which objects are alive. In this section, we will explain how the GC metadata is stored in JSC, and the motivation behind its design.

In JSC, every object managed by the GC carries the following metadata:

  • Every object managed by the GC inherits the JSCell class, which contains a 1-byte member cellState . This cellState is a color marker with two colors: white and black [12] .

  • Every object also has two out-of-object metadata bits: isNew [13] and isMarked . For objects allocated by PreciseAllocation , the bits reside in the GC header. For objects allocated by CompleteSubspace , the bits reside in the block footer.

This may seem odd at first glance since isNew and isMarked could have been stored in the unused bits of cellState . However, this is intentional.

The inlined metadata cellState is easy to access for the mutator thread (the thread executing JavaScript code), since it is just a field in the object. However, it has bad memory locality for the GC and allocators, which need to quickly traverse through all the metadata of all objects in some block owned by CompleteSubspace (which is the common case). Outlined metadata have the opposite performance characteristics: they are more expensive to access for the mutator thread, but since they are aggregated into bitvectors and stored in the block footer of each block, GC and allocators can traverse them really fast.

So JSC keeps both inlined and outlined metadata to get the better of both worlds: the mutator thread’s fast path will only concern the inlined cellState , while the GC and allocator logic can also take advantage of the memory locality of the outlined bits isNew and isMarked .
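Conceptually, the metadata is laid out like this (a simplified sketch; the real JSC classes carry many more fields):

```cpp
struct JSCell {            // base of every GC-managed object
    // ... type info and other header fields ...
    uint8_t cellState;     // inlined: one load away for the mutator's WriteBarrier
};

struct BlockFooter {       // at the end of every 16KB block
    Bitvector isNew;       // outlined: one bit per cell, so the GC and the
    Bitvector isMarked;    // allocator can sweep them with good memory locality
    // ... per-block versions, locks, and other metadata ...
};
```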

Of course, the cost of this is a more complex design… so we have to unfold it bit by bit.

A Really Naive Stop-the-World Generational GC

Let’s start with a really naive design just to understand what is needed. We will design a generational, but stop-the-world (i.e. not incremental nor concurrent) GC, with no performance optimizations at all. In this design, the mutator side transfers control to the GC subsystem at a “safe point” [14] to start a GC cycle (eden or full). The GC subsystem performs the GC cycle from the beginning to the end (as a result, the application cannot run during this potentially long period, thus “stop-the-world”), and then transfers control back to the mutator side.

For this purpose, let’s temporarily forget about CompleteSubspace : it is an optimized version of PreciseAllocation for small allocations, and while it is an important optimization, it’s easier to understand the GC algorithm without it.

It turns out that in this design, all we need is one isMarked bit. The isMarked bit will indicate if the object is reachable at the end of the last GC cycle (and consequently, is in the old space, since any object that survived a GC cycle is in old space). All objects are born with isMarked = false .

The GC will use a breadth-first search to scan and mark objects. For a full GC, we want to reset all isMarked bits to false at the beginning, and do a BFS to scan and mark all objects reachable from GC roots . Then all the unmarked objects are known to be dead. For an eden GC, we only want to scan the eden space. Fortunately, all objects in the old space are already marked at the end of the previous GC cycle, so they are naturally ignored by the BFS, and we can simply reuse the same BFS algorithm as in a full GC. In pseudo-code:

Eden GC preparation phase: no work is needed.

Full GC preparation phase [15] :
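A sketch in pseudo-C++, where allObjects() is an assumed helper that walks the per-object linked list from footnote [15]:

```cpp
void fullGCPrepare() {
    for (Object* obj : allObjects())  // every object, live or dead
        obj->isMarked = false;
}
```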

Eden/Full GC marking phase:
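A sketch of the BFS (the queue is seeded as described right below):

```cpp
void gcMark() {
    // The queue is seeded with the GC roots (and, for eden GC, the remembered set).
    while (!queue.empty()) {
        Object* obj = queue.pop();
        for (Object* child : obj->children()) {
            if (!child->isMarked) {
                child->isMarked = true;   // mark when pushing, so each object
                queue.push(child);        // is queued at most once
            }
        }
    }
}
```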

Eden/Full GC collection phase:
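And a sketch of the collection phase:

```cpp
void gcCollect() {
    for (Object* obj : allObjects()) {
        if (!obj->isMarked)
            reclaim(obj);   // for PreciseAllocation, this is essentially free()
    }
}
```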

But where does the scan start, so that we can scan through every reachable object? For full GC, the answer is clear: we just start the scan from all GC roots [16] . However, for an eden GC, in order to reliably scan through all reachable objects, the situation is slightly more complex:

  • Of course, we still need to push the GC roots to the initial queue.

  • If an object in the old space contains a pointer to an object in eden, we need to put the old space object into the initial queue [17] .

The invariant for the second case is maintained on the mutator side. Specifically, whenever one writes a pointer slot of some object A in the heap to point to another object B , one needs to check if A.isMarked is true and B.isMarked is false . If so, one needs to put A into a “remembered set”. An eden GC must treat the objects in the remembered set as if they were GC roots. This is called a WriteBarrier . In pseudo-code:
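A sketch, directly matching the description above:

```cpp
// Executed after writing "A.field = B".
void WriteBarrier(Object* A, Object* B) {
    if (A->isMarked && !B->isMarked)
        rememberedSet.add(A);   // an eden GC treats A as if it were a GC root
}
```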

Getting Incremental

A stop-the-world GC isn’t optimal for production use. A GC cycle (especially a full GC cycle) can take a long time. Since the mutator (application logic) cannot run during the stop-the-world period, the application would appear irresponsive to the user, which can be a very bad user experience for long pauses.

A natural way to shorten this irresponsive period is to run GC incrementally: at safe points, the mutator transfers control to the GC. The GC only runs for a short time, doing a portion of the work for the current GC cycle (eden or full), then return control to the mutator. This way, each GC cycle is split into many small steps, so the irresponsive periods are less noticeable to the user.

Incremental GC poses a few new challenges to the GC scheme.

The first challenge is the extra interference between the GC and the mutator: the mutator, namely the allocator and the WriteBarrier , must be prepared to see states arising from a partially-completed GC cycle. And the GC side must correctly mark all reachable objects despite changes made by the mutator side in between.

Specifically, our full GC must change: imagine that the full GC scanned some object o and handed back control to mutator, then the mutator changed a field of o to point to some other object dst . The object dst must not be missed from scanning. Fortunately, in such a case o will be isMarked and dst will be !isMarked (if dst has isMarked then it has been scanned, so there’s nothing to worry about), so o will be put into the remembered set.

Therefore, for a full GC to function correctly in the incremental GC scheme, it must consider the remembered set as a GC root as well, just like the eden GC.

The other parts of the algorithm as of now can remain unchanged (we leave the proof of correctness as an exercise for the reader). Nevertheless, “what happens if a GC cycle is run partially?” is something that we must keep in mind as we add more optimizations.

The second challenge is that the mutator side can repeatedly put an old space object into the remembered set, resulting in redundant work for the GC: for example, the GC popped some object o in the remembered set, traversed from it, and handed over control to the mutator. The mutator modified o again, putting it back into the remembered set. If this happens too often, the incremental GC could do a lot more work than a stop-the-world GC.

The situation will get even worse once we make our GC concurrent: in a concurrent GC, since the GC is no longer stealing CPU time from the mutator, the mutator gets higher throughput, thus will add even more objects into the remembered set. In fact, JSC observed up to 5x higher memory consumption without any mitigation. Therefore, two techniques are employed to mitigate this issue.

The first and obvious mitigation is to have the GC scan the remembered set last: only when the queue has otherwise been empty do we start popping from the remembered set. The second mitigation employed by JSC is a technique called Space-Time Scheduler . In short, if it observes that the mutator was allocating too fast, the mutator would get decreasingly less time quota to run so the GC can catch up (and in the extreme case, the mutator would get zero time quota to run, so it falls back to the stop-the-world approach). Filip Pizlo’s blog post has explained it very clearly, so feel free to take a look if you are interested.

Anyways, let’s update the pseudo-code for the eden/full GC marking phase:
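A sketch with the first mitigation applied (the remembered set is only drained once the queue is empty):

```cpp
void gcMark() {
    while (true) {
        if (queue.empty()) {
            if (rememberedSet.empty())
                break;                        // both empty: marking is done
            queue.push(rememberedSet.pop());  // drain the remembered set last
        }
        Object* obj = queue.pop();
        for (Object* child : obj->children()) {
            if (!child->isMarked) {
                child->isMarked = true;
                queue.push(child);
            }
        }
    }
}
```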

Incorporate in CompleteSubspace

It’s time to get our CompleteSubspace allocator back so we don’t have to suffer the huge per-object GC header overhead incurred by PreciseAllocation .

For PreciseAllocation , the actual memory management work is done by malloc : when the mutator wants to allocate an object, it just malloc s it, and when the GC discovers a dead object, it just free s it.

CompleteSubspace introduces another complexity, as it only allocates/deallocates memory from malloc at 16KB-block level, and does memory management itself to divide the blocks into cells that it serves to the application. Therefore, it has to track whether each of its cells is available for allocation. The mutator allocates from the available cells, and the GC marks dead cells as available for allocation again.

The isMarked bit is not enough for the CompleteSubspace allocator to determine if a cell contains a live object or not: newly allocated objects have isMarked = false but are clearly live objects. Therefore, we need another bit.

In fact, there are other good reasons why we need to support checking if a cell contains a live object or not. A canonical example is conservative stack scanning: JSC does not precisely understand the layout of the stack, so it needs to treat everything on the stack that could be a pointer to a live object as a GC root, and this involves checking if a heap pointer points to a live object or not.

One can easily imagine some kind of isLive bit that is true for all live objects, which is only flipped to false by the GC when the object is dead. However, JSC employed a slightly different scheme, which is needed to facilitate optimizations that we will mention later.

As you have seen earlier, the bit used by JSC is called isNew .

However, keep in mind: you should not think of isNew as a bit that tells you anything related to its name, or indicates anything by itself. You should think of it as a helper bit, whose sole purpose is that, when working together with isMarked , the two tell you if a cell contains a live object or not. This way of thinking will be more important in the next section when we introduce logical versioning.

The core invariant around isNew and isMarked is:

  • At any moment, an object is dead iff its isNew = false and isMarked = false .

If we were a stop-the-world GC, then to maintain this invariant, we only need the following:

  • When an object is born, it has isNew = true and isMarked = false .

  • At the end of each eden or full GC cycle, we set isNew of all objects to false .

Then, all newly-allocated objects are live because their isNew is true . At the end of each GC cycle, an object is live iff its isMarked is true , so after we set isNew to false (due to rule 2), the invariant on what is a dead object is maintained, as desired.

However, in an incremental GC, since the state of a partially-run GC cycle can be exposed to mutator, we need to ensure that the invariant holds in this case as well.

Specifically, in a full GC, we reset all isMarked to false at the beginning. Then, during a partially-run GC cycle, the mutator may see a live object with both isMarked = false (because it has not been marked by the GC yet), and isNew = false (because it has survived one prior GC cycle). This violates our invariant.

To fix this, at the beginning of a full GC, we additionally do isNew |= isMarked before clearing isMarked . Now, during the whole full GC cycle, all live objects must have isNew = true [18] , so our invariant is maintained. At the end of the cycle, all isNew bits are cleared, and as a result, all the unmarked objects become dead, so our invariant is still maintained as desired. So let’s update our pseudo-code:

Full GC preparation phase:
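A sketch, with allObjects() as before:

```cpp
void fullGCPrepare() {
    for (Object* obj : allObjects()) {
        obj->isNew |= obj->isMarked;  // keep surviving old-space objects "live"
        obj->isMarked = false;        // forget the last cycle's marks
    }
}
```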

In CompleteSubspace allocator, to check if a cell in a block contains a live object (if not, then the cell is available for allocation):
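In sketch form:

```cpp
bool cellContainsLiveObject(Cell* cell) {
    return cell->isNew || cell->isMarked;  // invariant: dead iff both bits are false
}
```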

Logical Versioning: Do Not Sweep!

We are doing a lot of work at the beginning of a full GC cycle and at the end of any GC cycle, since we have to iterate through all the blocks in CompleteSubspace and update their isMarked and isNew bits. Although the bits in one block are clustered into bitvectors and thus have good memory locality, this could still be an expensive operation, especially once we have a concurrent GC (as this stage cannot be made concurrent). So we want something better.

The optimization JSC employs is logical versioning. Instead of physically clearing all bits in all blocks for every GC cycle, we only bump a global “logical version”, indicating that all the bits are logically cleared (or updated). Only when we actually need to mark a cell in a block during the marking phase do we then physically clear (or update) the bitvectors in this block.

You may ask: why bother with logical versioning, if in the future we still have to update the bitvectors physically anyway? There are two good reasons:

  • If all cells in a block are dead (either died out during this GC cycle [19] , or already dead before this GC cycle), then we will never mark anything in the block, so logical versioning enabled us to avoid the work altogether. This also implies that at the end of each GC cycle, it’s unnecessary to figure out which blocks become completely empty, as logical versioning makes sure that these empty blocks will not cause overhead to future GC cycles.

  • The marking phase can be done concurrently with multiple threads and while the mutator thread is running (our scheme isn’t concurrent yet, but we will make it so soon), while the preparation / collection phase must be performed with the mutator stopped. Therefore, shifting the work to the marking phase reduces GC latency in a concurrent setting.

There are two global version numbers: g_markVersion and g_newVersion [20] . Each block footer also stores its local version numbers l_markVersion and l_newVersion .

Let’s start with the easier case: the logical versioning for the isNew bit.

If you revisit the pseudo-code above, in GC there is only one place where we write isNew : at the end of each GC cycle, we set all the isNew bits to false . Therefore, we simply bump g_newVersion there instead. A local version l_newVersion smaller than g_newVersion means that all the isNew bits in this block have been logically cleared to false .

When the CompleteSubspace allocator allocates a new object, it needs to start with isNew = true . One could clearly do this directly, but JSC does it in a trickier way that involves a block-level bit named allocated , for slightly better performance. This is not too interesting, so I defer it to the end of the post ; the scheme described here will not employ this optimization (but is otherwise intentionally kept semantically equivalent to JSC):

  • When a BlockDirectory starts allocating from a new block, it updates the block’s l_newVersion to g_newVersion , and sets isNew to true for all already-allocated cells (as the block may not be fully empty), and false for all free cells.

  • Whenever it allocates a cell, it sets its isNew to true.

Why do we want to bother setting isNew to true for all already-allocated cells in the block? This is to provide a good property. Since we bump g_newVersion at the end of every GC cycle, due to the scheme above, for any block with latest l_newVersion , a cell is live if and only if its isNew bit is set. Now, when checking if a cell is live, if its l_newVersion is the latest, then we can just return isNew without looking at isMarked , so our logic is simpler.
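Putting the two rules above in sketch form (illustrative names; JSC’s actual code uses the allocated -bit shortcut described at the end of the post):

```cpp
void BlockDirectory::startAllocatingIn(Block* block) {
    for (Cell* cell : block->cells())                // rule 1: materialize isNew,
        cell->isNew = cellContainsLiveObject(cell);  // computed under the old versions
    block->footer()->l_newVersion = g_newVersion;    // from now on, isNew alone decides
    m_currentBlock = block;
    m_freeList = block->deadCells();
}

void* BlockDirectory::allocate() {
    Cell* cell = m_freeList.pop();
    if (cell)
        cell->isNew = true;                          // rule 2: newly allocated => isNew
    return cell;
}
```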

The logical versioning for the isMarked bit is similar. At the beginning of a full GC cycle, we bump g_markVersion to indicate that all mark bits are logically cleared. Note that the global version is not bumped for an eden GC, since an eden GC does not clear isMarked bits.

There is one extra complexity: the above scheme would break down in an incremental GC. Specifically, during a full GC cycle, we have logically cleared the isMarked bit, but we also didn’t do anything to the isNew bit, so all cells in the old space would appear dead to the allocator. In our old scheme without logical versioning, this case is prevented by doing isNew |= isMarked at the start of the full GC, but we cannot do it now with logical versioning.

JSC solves this problem with the following clever trick: during a full GC, we should also accept l_markVersion that is off-by-one. In that case, we know the isMarked bit accurately reflects whether or not a cell is live, since that is the result of the last GC cycle. If you are a bit confused, take a look at footnote [21] for a more elaborated case discussion. It might also help to take a look at the comments in the pseudo-code below:
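In sketch form, the liveness check under logical versioning becomes:

```cpp
bool cellContainsLiveObject(Cell* cell) {
    BlockFooter* b = cell->block()->footer();
    if (b->l_newVersion == g_newVersion)
        return cell->isNew;          // latest isNew: it alone decides (see above)
    // Otherwise, every isNew bit in this block is logically false.
    if (b->l_markVersion == g_markVersion)
        return cell->isMarked;       // up-to-date mark bit
    if (isFullGCRunning() && b->l_markVersion + 1 == g_markVersion)
        return cell->isMarked;       // off-by-one accepted during a full GC [21]
    return false;                    // both bits logically false: the cell is dead
}
```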

Before we mark an object in CompleteSubspace , we need to update the l_markVersion of the block holding the cell to the latest, and materialize the isMarked bits of all cells in the block. That is, we need to run the logic at the full GC preparation phase in our old scheme: isNew |= isMarked , isMarked = false for all cells in the block. This is shown below.
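A sketch of this aboutToMark step (the bitvector assignment below is where the conceptual isNew |= isMarked happens):

```cpp
// Called (once per block, per full GC) before the first mark in a stale block.
void BlockFooter::aboutToMark() {
    if (l_newVersion == g_newVersion) {
        // isNew is up to date, so it already covers every live cell:
        // "isNew |= isMarked" would change nothing. Just clear the marks.
        isMarkedBits.clearAll();
    } else {
        // isNew is logically all-false, so "isNew |= isMarked" is a plain copy.
        isNewBits = isMarkedBits;
        l_newVersion = g_newVersion;
        isMarkedBits.clearAll();
    }
    l_markVersion = g_markVersion;   // this block's marks are now current
}
```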

A fun fact: despite that what we conceptually want to do above is isNew |= isMarked , the above code never performs a |= at all 🙂

And also, let’s update the pseudo-code for the preparation GC logic:
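In sketch form:

```cpp
void fullGCPrepare() {
    g_markVersion++;     // logically clears every isMarked bit, in O(1)
}
// Eden GC preparation: still no work needed.
// At the end of any GC cycle (eden or full):
//     g_newVersion++;   // logically clears every isNew bit
```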

With logical versioning, the GC no longer sweeps the CompleteSubspace blocks to reclaim dead objects: the reclamation happens lazily, when the allocator starts to allocate from the block. This, however, introduces an unwanted side-effect. Some objects use manual memory management internally: they own additional memory that is not managed by the GC, and have C++ destructors to free that memory when the object is dead. This improves performance as it reduces the work of the GC. However, now we may not immediately sweep dead objects and run destructors, so the memory that is supposed to be freed by the destructor could be kept around indefinitely if the block is never allocated from. To mitigate this issue, JSC will also periodically sweep blocks and run the destructors of the dead objects. This is implemented by IncrementalSweeper , but we will not go into details.

To conclude, logical versioning provides two important optimizations to the GC scheme:

  • The so-called “sweep” phase of the GC (to find and reclaim dead objects) is removed for CompleteSubspace objects. The reclamation is done lazily. This is clearly better than sweeping through the block again and again in every GC cycle.
  • The full GC does not need to reset all isMarked bits in the preparation phase, but only lazily reset them in the marking phase by aboutToMark : this not only reduces work, but also allows the work to be done in parallel and concurrently while the mutator is running, after we make our GC scheme concurrent.

Optimizing WriteBarrier: The cellState Bit

As we have explained earlier, whenever the mutator modified a pointer of a marked object o to point to an unmarked object, it needs to add o to the “remembered set”, and this is called the WriteBarrier . In this section, we will dig a bit deeper into the WriteBarrier and explain the optimizations around it.

The first problem with our current WriteBarrier is that the isMarked bit resides in the block footer, so retrieving its value requires quite a few computations from the object pointer. It also doesn’t sit in the same CPU cache line as the object, which makes the access even slower. This is undesirable as the cost is paid for every WriteBarrier , regardless of whether we add the object to the remembered set.

The second problem is that our WriteBarrier will repeatedly add the same object o to the remembered set every time it is run. The obvious solution is to make rememberedSet a hash set to de-duplicate the objects it contains, but doing a hash lookup to check if the object already exists is far too expensive.

This is where the last piece of metadata that we haven’t explained yet, the cellState byte, comes in: it solves both problems.

Instead of making rememberedSet a hash table, we reserve a byte (though we only use 1 bit of it) named cellState in every object’s object header, to indicate if we might need to put the object into the remembered set in a WriteBarrier . Since this bit resides in the object header as an object field (instead of in the block footer), it’s trivially accessible to the mutator who has the object pointer.

cellState has two possible values: black and white . The most important two invariants around cellState are the following:

  • For any object with cellState = white , it is guaranteed that the object does not need to be added to remembered set.
  • Except during a full GC cycle, all black (live) objects have isMarked = true .

Invariant 1 serves as a fast-path: WriteBarrier can return immediately if our object is white , and checking it only requires one load instruction (to load cellState ) and one comparison instruction to validate it is white .

However, if the object is black , a slow-path is needed to check whether it is actually needed to add the object to the remembered set.

Let’s look at our new WriteBarrier :
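In sketch form (the slow path is developed below):

```cpp
// Executed after writing "obj->field = dst".
void WriteBarrier(Object* obj) {
    if (obj->cellState == black)   // one load + one compare
        WriteBarrierSlowPath(obj);
    // white: invariant 1 says there is nothing to do
}
```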

The first thing to notice is that the WriteBarrier is no longer checking if dst (the object that the pointer points to) is marked or not. Clearly this does not affect the correctness: we are just making the criteria less restrictive. However, the performance impact of removing this dst check is a tricky question without a definite answer, even for JSC developers. Through some preliminary testing, their conclusion is that adding back the !isMarked(dst) check slightly regresses performance. They have two hypotheses. First, by not checking dst , more objects are put into the remembered set and need to be scanned by the GC, so the total amount of work increased. However, the mutator’s work probably decreased, as it does fewer checks and touches fewer cache lines (by not touching the outlined isMarked bit). Of course such benefit is offset because the mutator is adding more objects into the remembered set, but this isn’t too expensive either, as the remembered set is only a segmented vector. The GC has to do more work, as it needs to scan and mark more objects. However, after we make our scheme concurrent, the marking phase of the GC can be done concurrently as the mutator is running, so the latency is probably [22] hidden. Second, JSC’s DFG compiler has an optimization pass that coalesces barriers on the same object together, and it is harder for such barriers to check all the dsts .

The interesting part is how the invariants above are maintained by the relevant parties. As always, there are three actors: the mutator ( WriteBarrier ), the allocator, and the GC.

The interaction with the allocator is the simplest. All objects are born white . This is correct because newly-born objects are not marked, so have no reason to be remembered.

The interaction with GC is during the GC marking phase:

  • When we mark an object and push it into the queue, we set its cellState to white .
  • When we pop an object from the queue, before we start to scan its children, we set its cellState to black .

In pseudo-code, the Eden/Full GC marking phase now looks like the following sketch (the newly-added logic to handle cellState is called out in the comments; the other lines are unchanged):
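```cpp
void gcMark() {
    while (true) {
        if (queue.empty()) {
            if (rememberedSet.empty())
                break;
            queue.push(rememberedSet.pop());   // drain the remembered set last
        }
        Object* obj = queue.pop();
        obj->cellState = black;                // newly added: popped => black
        for (Object* child : obj->children()) {
            if (!child->isMarked) {
                child->isMarked = true;
                child->cellState = white;      // newly added: queued => white
                queue.push(child);
            }
        }
    }
}
```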

Let’s argue why the invariant is maintained by the above code.

  • For invariant 1, note that in the above code, an object is white only if it is inside the queue (as once it’s popped out, it becomes black again), pending scanning of its children. Therefore, it is guaranteed that the object will still be scanned by the GC later, so we don’t need to add the object to remembered set, as desired.
  • For invariant 2, at the end of any GC cycle, any live object is marked, which means it has been scanned, so it is black , as desired.

Now let’s look at what WriteBarrierSlowPath should do. Clearly, it’s correct if it simply unconditionally adds the object to the remembered set, but that also defeats most of the purpose of cellState as an optimization mechanism: we want something better. A key purpose of cellState is to prevent adding an object to the remembered set if it is already there. Therefore, after we put the object into the remembered set, we will set its cellState to white , as shown below.
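```cpp
void WriteBarrierSlowPath(Object* obj) {
    rememberedSet.add(obj);
    obj->cellState = white;   // don't re-add obj until the GC pops it again
}
```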

Let’s prove why the above code works. Once we add an object to the remembered set, we set it to white . We don’t need to add the same object to the remembered set again until it gets popped out of the set by the GC. But when the GC pops out the object, it sets its cellState back to black , so we are good.

JSC employed one more optimization. During a full GC, we might see a black object that has isMarked = false (note that this is the only possible case that the object is unmarked, due to invariant 2). In this case, it’s unnecessary to add the object to remembered set, since the object will eventually be scanned in the future (or it became dead some time later before it was scanned, in which case we are good as well). Furthermore, we can flip it back to white , so we don’t have to go into this slow path the next time a WriteBarrier on this object runs. To sum up, the optimized version is as below:
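A sketch of this version (it is refined once more in the concurrency sections below):

```cpp
void WriteBarrierSlowPath(Object* obj) {
    if (!isMarked(obj)) {
        // Only possible during a full GC (invariant 2). The GC will still
        // scan obj if it stays reachable, so don't remember it; flip it back
        // to white so future barriers take the fast path.
        obj->cellState = white;
        return;
    }
    rememberedSet.add(obj);
    obj->cellState = white;
}
```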

Getting Concurrent and Getting Wild

At this point, we already have a very good incremental and generational garbage collector: the mutator, allocator and GC all have their respective fast-paths for the common cases, and with logical versioning, we avoided redundant work as much as possible. In my humble opinion, this is a good balance point between performance and engineering complexity.

However, because JSC is one of the core drivers of performance in Safari, it’s unsurprising that performance is a top priority, even at the cost of engineering complexity. To squeeze out every bit of performance, JSC made their GC concurrent. This is no easy feat: due to the nature of GCs, it’s often too slow to use locks to protect against certain race conditions, so extensive lock-free programming is employed.

But once lock-free programming is involved, one starts to get into all sorts of architecture-dependent memory reordering problems. x86-64 is the stricter architecture: it provides TSO-like semantics where only store-load reordering is possible, so StoreLoadFence() is the only fence ever required. JSC also supports ARM64 CPUs, which have even fewer guarantees: load-load, load-store, store-load, and store-store can all be reordered by the CPU, so a lot more operations need fences. As if things were not bad enough, for performance reasons, JSC often avoids using memory fences on ARM64. They have the so-called Dependency class , which creates an implicit CPU data dependency on ARM64 through some scary assembly hacks, so they can get the desired memory ordering for a specific data-flow without paying the cost of a memory fence. As you can imagine, with all of these complications and optimizations, the code can become difficult to read.

So due to my limited expertise, it’s unsurprising if I failed to explain or mis-explained some important race conditions in the code, especially some ARM64-specific ones: if you spot any issue in this post, please let me know.

Let’s go through the concurrency assumptions first. JavaScript is a single-threaded language, so there is always only one mutator thread [23] . Apart from the mutator thread, JSC has a bunch of compilation threads, a GC thread, and a bunch of marking threads. Only the GC marking phase is concurrent: during which the mutator thread, the compiler threads, and a bunch of marking threads are concurrently running (yes, the marking itself is also done in parallel). However, all the other GC phases are run with the mutator thread and compilation threads stopped.

Some Less Interesting Issues

First of all, clearly the isMarked and isNew bitvectors must be made safe for concurrent access, since multiple threads (including marking threads and the mutator) may concurrently update them. Using CAS with an appropriate retry/bail mechanism is enough for the bitvector itself.
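For example, setting a mark bit might look like the following CAS loop (a generic sketch, not JSC’s exact code):

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Returns true if we won the race to set the bit (a 0 -> 1 transition).
bool concurrentTestAndSet(std::atomic<uint64_t>* words, size_t bitIndex) {
    std::atomic<uint64_t>& word = words[bitIndex / 64];
    uint64_t mask = uint64_t(1) << (bitIndex % 64);
    uint64_t old = word.load(std::memory_order_relaxed);
    do {
        if (old & mask)
            return false;    // someone else already set it: bail
    } while (!word.compare_exchange_weak(old, old | mask));
    return true;
}
```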

BlockFooter is harder, and needs to be protected with a lock: multiple threads could be simultaneously calling aboutToMark() , so aboutToMark() must be guarded. For the reader side (the isMarked() function, which involves first checking if l_markVersion is latest, then reading the isMarked bitvector), in x86-64 thanks to x86-TSO, one does not need a lock or any memory fence (as long as aboutToMark takes care to update l_markVersion after the bitvector). In ARM64, since load-load reordering is allowed, a Dependency is required.

Making the cellContainsLiveObject (or in JSC jargon, isLive ) check lock-free is harder, since it involves potentially reading both the isMarked bit and the isNew bit. JSC employs optimistic locking to provide a fast-path. This is not very different from an optimistic locking scheme you can find in a textbook, so I won’t dive into the details.

Of course, there are a lot more subtle issues to handle. Almost all the pseudo-code above needs to be adapted for concurrency, either by using a lock or CAS, or by using some sort of memory barrier and concurrency protocol to ensure that the code works correctly under races. But now let’s turn to some more important and tricky issues.

The Race Between WriteBarrier and Marking

One of the most important races is the race between WriteBarrier and GC’s marking threads. The marking threads and the mutator thread can access the cellState of an object concurrently. For performance reasons, a lock is infeasible, so a race condition arises.

It’s important to note that we call WriteBarrier after we have written the pointer into the object. This is not only more convenient to use (especially for JIT-generated code), but also allows a few optimizations: for example, in certain cases , multiple writes to the same object may only call WriteBarrier once at the end.

With this in mind, let’s analyze why our current implementation is buggy. Suppose o is an object, and the mutator wants to store a pointer to another object target into a field f of o . The marking logic of the GC wants to scan o and append its children into the queue. We need to make sure that GC will observe the o->target pointer link.

Let’s first look at the correct logic:
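A sketch; the line numbers in the comments are the ones referenced by the interleavings below:

```cpp
// Mutator:
void mutatorWriteWithBarrier(Object* o, Object* target) {
    o->f = target;                    // Line 1: the pointer store itself
    StoreLoadFence();                 // Line 2
    if (o->cellState == black)        // Line 3: load cellState
        WriteBarrierSlowPath(o);
}

// GC, scanning an object popped from the queue:
void gcScan(Object* o) {
    o->cellState = black;             // Line 1: store cellState
    StoreLoadFence();                 // Line 2
    for (Object* child : o->children())  // Line 3: loads o's fields, e.g. o.f
        markAndPush(child);
}
```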

This is mostly just a copy of the pseudocode in the above sections, except that we have two StoreLoadFence() . A StoreLoadFence() guarantees that no LOAD after the fence may be executed by the CPU out-of-order engine until all STORE before the fence have completed. Let’s first analyze what could go wrong without either of the fences.

Just to make things perfectly clear, the precondition is o.cellState = white (because o is in the GC’s queue) and o.f = someOldValue .

What could go wrong if the mutator WriteBarrier doesn’t have the fence? Without the fence, the CPU can execute the LOAD in line 3 before the STORE in line 1. Then, in the following interleaving:

  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = white
  • [GC Line 1] Store(o.cellState, black)
  • [GC Line 3] t2 = Load(o.f)                 // t2 = some old value
  • [Mutator Line 1] Store(o.f, target)

Now, the mutator did not add o to the remembered set (because t1 is white , not black ), and t2 in GC is the old value in o.f instead of target , so GC did not push target into the queue. So the pointer link from o to target is missed by the GC. This can result in target being wrongly reclaimed despite being live.

And what could go wrong if the GC marking logic doesn’t have the fence? Similarly, without the fence, the CPU can execute the LOAD in line 3 before the STORE in line 1. Then, in the following interleaving:

  • [GC Line 3] t2 = Load(o.f)                 // t2 = some old value
  • [Mutator Line 1] Store(o.f, target)
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = white
  • [GC Line 1] Store(o.cellState, black)

Similar to above, the mutator sees t1 = white and the GC sees t2 = oldValue . So o is not added to the remembered set, and target is not pushed into the queue: the pointer link is missed.

Finally, let’s analyze why the code behaves correctly if both fences are present. Unfortunately there is not a better way than manually enumerating all the interleavings. Thanks to the fences, Mutator Line 1 must execute before Mutator Line 3 , and GC Line 1 must execute before GC Line 3 , but the four lines can otherwise be reordered arbitrarily. So there are 4! / 2! / 2! = 6 possible interleavings. So let’s go!

Interleaving 1:

  • [Mutator Line 1] Store(o.f, target)
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = white
  • [GC Line 1] Store(o.cellState, black)
  • [GC Line 3] t2 = Load(o.f)                 // t2 = target

In this interleaving, the mutator did not add o to the remembered set, but the GC sees target , so it’s fine.

Interleaving 2:

  • [GC Line 1] Store(o.cellState, black)
  • [GC Line 3] t2 = Load(o.f)                 // t2 = some old value
  • [Mutator Line 1] Store(o.f, target)
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = black

In this interleaving, the GC saw the old value, but the mutator added o to the remembered set, so the GC will eventually drain from the remembered set and scan o again, at which time it will see the correct new value target , so it’s fine.

Interleaving 3:

  • [Mutator Line 1] Store(o.f, target)
  • [GC Line 1] Store(o.cellState, black)
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = black
  • [GC Line 3] t2 = Load(o.f)                 // t2 = target

In this interleaving, the GC saw the new value target ; nevertheless, the mutator saw t1 = black and added o to the remembered set. This is unfortunate since the GC will scan o again, but it doesn’t affect correctness.

Interleaving 4:

  • [Mutator Line 1] Store(o.f, target)
  • [GC Line 1] Store(o.cellState, black)
  • [GC Line 3] t2 = Load(o.f)                 // t2 = target
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = black

Same as Interleaving 3.

Interleaving 5:

  • [GC Line 1] Store(o.cellState, black)
  • [Mutator Line 1] Store(o.f, target)
  • [GC Line 3] t2 = Load(o.f)                 // t2 = target
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = black

Same as Interleaving 3.

Interleaving 6:

  • [GC Line 1] Store(o.cellState, black)
  • [Mutator Line 1] Store(o.f, target)
  • [Mutator Line 3] t1 = Load(o.cellState)    // t1 = black
  • [GC Line 3] t2 = Load(o.f)                 // t2 = target

Same as Interleaving 3.

This proves that with the two StoreLoadFence() , our code is no longer vulnerable to the above race condition.

Another Race Condition Between WriteBarrier and Marking

The above fix alone is not enough: there is another race between WriteBarrier and GC marking threads. Recall that in WriteBarrierSlowPath , we attempt to flip the object back to white if we saw it is not marked (this may happen during a full GC), as illustrated below:
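The relevant fragment of the slow path from the previous section, in sketch form:

```cpp
void WriteBarrierSlowPath(Object* obj) {
    if (!isMarked(obj)) {
        obj->cellState = white;   // racy: see the interleaving below
        return;
    }
    rememberedSet.add(obj);
    obj->cellState = white;
}
```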

It turns out that, after setting the object white , we need to do a StoreLoadFence() , and check again if the object is marked. If it becomes marked, we need to set obj->cellState back to black .

Without the fix, the code is vulnerable to the following race:

  • [Precondition] o.cellState = black and o.isMarked = false
  • [WriteBarrier] Check isMarked()                 // see false
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), pushed ‘o’ into queue
  • [GC Marking] Popped ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] Store(o.cellState, white)
  • [Postcondition] o.cellState = white and o.isMarked = true

The post-condition is bad because o will not be added to the remembered set in the future, even though it needs to be (as the GC has already scanned it).

Let’s now prove why the code is correct when the fix is applied. Now the WriteBarrier logic looks like this:

  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)

And the GC marking logic consists of the two lines we saw in the race above:

  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)

Note that we omitted the first “Check isMarked()” line because it must be the first thing executed in the interleaving, as otherwise the if -check won’t pass at all.

The three lines in WriteBarrier cannot be reordered by CPU: Line 1-2 cannot be reordered because of the StoreLoadFence() , line 2-3 cannot be reordered since line 3 is a store that is only executed if line 2 is true. The two lines in GC cannot be reordered by CPU because line 2 stores to the same field o.cellState as line 1.

In addition, note that it’s fine if at the end of WriteBarrier , the object is black but GC has only executed to line 1: this is unfortunate, because the next WriteBarrier on this object will add the object to the remembered set despite it being unnecessary. However, it does not affect our correctness. So now, let’s enumerate all the interleavings again!

Interleaving 1.

  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()                              // t1 = false
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // not executed
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)

Object is not marked and white at the end of the WriteBarrier ; the GC then marks, queues, and blackens it as usual, OK.

Interleaving 2.

  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()                              // t1 = false
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // not executed
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)

Object is in queue and white, OK.

Interleaving 3.

  • [WriteBarrier] Store(o.cellState, white)
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)

Object is in queue and black, unfortunate but OK.

Interleaving 4.

  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)

Same as Interleaving 3: in queue and black, unfortunate but OK.

Interleaving 5.

  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()                              // t1 = false
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // not executed

Object is marked and black, OK.

Interleaving 6.

  • [WriteBarrier] Store(o.cellState, white)
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed

Object is marked and black, OK.

Interleaving 7.

  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed

Object is marked and black, OK.

Interleaving 8.

  • [WriteBarrier] Store(o.cellState, white)
  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed

Object is marked and black, OK.

Interleaving 9.

  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [WriteBarrier] Store(o.cellState, white)
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed

Object is marked and black, OK.

Interleaving 10.

  • [GC Marking] CAS(o.isMarked, true), Store(o.cellState, white), push ‘o’ into queue
  • [GC Marking] Pop ‘o’ from queue, Store(o.cellState, black)
  • [WriteBarrier] Store(o.cellState, white)
  • [WriteBarrier] t1 = isMarked()                              // t1 = true
  • [WriteBarrier] if (t1 == true): Store(o.cellState, black)   // executed

Object is marked and black, OK.

So let’s update our pseudo-code. However, I would like to note that, in JSC’s implementation, they did not use a StoreLoadFence() after obj->cellState = white . Instead, they made the obj->cellState = white a CAS from black to white (with memory ordering memory_order_seq_cst ). This is stronger than a StoreLoadFence() so their logic is also correct. Nevertheless, just in case my analysis above missed some other race with other components, our pseudo-code will stick to their logic…

Mutator WriteBarrier pseudo-code:
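A sketch; CAS_seqcst is an illustrative helper denoting a memory_order_seq_cst compare-and-swap, which doubles as the StoreLoadFence discussed above:

```cpp
void WriteBarrier(Object* obj) {      // runs after the pointer store
    StoreLoadFence();                 // fence #1 from the race analysis
    if (obj->cellState == black)
        WriteBarrierSlowPath(obj);
}

void WriteBarrierSlowPath(Object* obj) {
    if (!isMarked(obj)) {
        // seq_cst CAS from black to white; stronger than StoreLoadFence().
        if (!CAS_seqcst(&obj->cellState, black, white))
            return;                   // lost the race: the GC already whitened obj
        if (isMarked(obj))            // re-check after the implied fence
            obj->cellState = black;   // the GC marked obj in between: restore black
        return;
    }
    rememberedSet.add(obj);
    obj->cellState = white;
}
```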

Eden/Full GC Marking phase:
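A sketch; concurrentTryMark is an assumed helper doing the CAS on the isMarked bitvector:

```cpp
void gcMark() {
    while (true) {
        if (queue.empty()) {
            if (rememberedSet.empty())
                break;
            queue.push(rememberedSet.pop());
        }
        Object* obj = queue.pop();
        obj->cellState = black;
        StoreLoadFence();                    // fence #2 from the race analysis
        for (Object* child : obj->children()) {
            if (concurrentTryMark(child)) {  // won the race to mark child
                child->cellState = white;
                queue.push(child);
            }
        }
    }
}
```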

Remove Unnecessary Memory Fence In WriteBarrier

The WriteBarrier is now free of hazardous race conditions. However, we are executing a StoreLoadFence() for every WriteBarrier , which is a very expensive CPU instruction. Can we optimize it?

The idea is the following: the fence is used to protect against race with GC. Therefore, we definitely need the fence if the GC is concurrently running. However, the fence is unnecessary if the GC is not running. Therefore, we can check if the GC is running first, and only execute the fence if the GC is indeed running.

JSC is even more clever: instead of having two checks (one that checks if the GC is running and one that checks if the cellState is black ), it combines them into a single check for the fast-path where the GC is not running and the object is white . The trick is the following:

  • Assume black = 0 and white = 1 in the cellState enum.
  • Create a global variable called blackThreshold . This blackThreshold is normally 0 , but at the beginning of a GC cycle, it will be set to 1 , and it will be reset back to 0 at the end of the GC cycle.
  • Now, check if obj->cellState > blackThreshold .

Then, if the check succeeded, we know we can immediately return: the only case this check can succeed is when the GC is not running and we are white (because blackThreshold = 0 and cellState = 1 is the only situation to pass the check). This way, the fast path only executes one check. If the check fails, then we fallback to the slow path, which performs the full procedure: check if GC is running, execute a fence if needed, then check if cellState is black again. In pseudo-code:
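A sketch; WriteBarrierFenceCheck is an illustrative name for the intermediate level:

```cpp
enum CellState : uint8_t { black = 0, white = 1 };
uint8_t g_blackThreshold = 0;      // 1 while a GC cycle is running, 0 otherwise

void WriteBarrier(Object* obj) {
    if (obj->cellState > g_blackThreshold)
        return;                    // only possible when GC is idle and obj is white
    WriteBarrierFenceCheck(obj);
}

void WriteBarrierFenceCheck(Object* obj) {
    if (IsGcRunning())
        StoreLoadFence();          // pay for the fence only while the GC is running
    if (obj->cellState == black)
        WriteBarrierSlowPath(obj); // the same slow path as before
}
```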

Note that there is no race between WriteBarrier and GC setting/clearing IsGcRunning() flag and changing the g_blackThreshold value, because the mutator is always stopped at a safe point (of course, halfway inside WriteBarrier is not a safe point) when the GC starts/finishes.

“Obstruction-Free Double Collect Snapshot”

The concurrent GC also introduced new complexities for the ForEachChild function used by the GC marking phase to scan all objects referenced by a certain object. Each JavaScript object has a Structure (aka, hidden class) that describes how the content of this object shall be interpreted into object fields. Since the GC marking phase is run concurrently with the mutator, and the mutator may change the Structure of the object, and may even change the size of the object’s butterfly, the GC must be sure that despite the race conditions, it will never crash by dereferencing invalid pointers and never miss to scan a child. Using a lock is clearly infeasible for performance reasons. JSC uses a so-called obstruction-free double collect snapshot to solve this problem. Please refer to Filip Pizlo’s GC blog post to see how it works.

Some Minor Design Details and Optimizations

You might find this section helpful if you want to actually read and understand the code of JSC, but otherwise feel free to skip it: these details are not centric to the design, and are not particularly interesting either. I mention them only to bridge the gap between the GC scheme explained in this post and the actual implementation in JSC.

As explained earlier, each CompleteSubspace owns a list of BlockDirectory to handle allocations of different sizes; each BlockDirectory has an active block m_currentBlock where it allocates from, and it achieves this by holding a free list of all available cells in the block. But how does it work exactly?

As it turns out, each BlockDirectory has a cursor , which is reset to point at the beginning of the block list at the end of an eden or full GC cycle. Until it is reset, it can only move forward. The BlockDirectory will move the cursor forward , until it finds a block containing available cells, and allocate from it. If the cursor reaches the end of the list, it will attempt to steal a 16KB block from another BlockDirectory and allocate from it. If that also fails, it will allocate a new 16KB block from malloc and allocate from it.
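A sketch of the cursor walk (illustrative names):

```cpp
void* BlockDirectory::refillAndAllocate() {
    for (; m_cursor < m_blocks.size(); m_cursor++) {   // the cursor only moves forward
        if (m_blocks[m_cursor]->hasAvailableCells())
            return allocateFrom(m_blocks[m_cursor]);
    }
    if (Block* stolen = stealBlockFromAnotherDirectory())
        return allocateFrom(stolen);
    return allocateFrom(Block::allocateFromMalloc()); // a new 16KB block
}
```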

I also mentioned that a BlockDirectory uses a free list to allocate from the currently active block m_currentBlock . It’s important to note that in the actual implementation of JSC, the cells in m_currentBlock do not respect the rule for the isNew bit. Therefore, to check liveness, one either needs to do a special-case check to see if the cell is from m_currentBlock (for example, see HeapCell::isLive ), or, for the GC [24] , stop the mutator, destroy the free list (and populate isNew in the process), do whatever inspection, then rebuild the free list and resume the mutator. The latter is implemented by two functions named stopAllocating() and resumeAllocating() , which are automatically called whenever the world is stopped or resumed .

The motivation for allowing m_currentBlock to not respect the rule for isNew is (a tiny bit of) performance. Instead of manually setting isNew to true for every allocation, a block-level bit allocated (aggregated as a bitvector in BlockDirectory ) is used to indicate if a block is full of live objects. When the free list becomes empty (i.e., the block is fully allocated), we simply set allocated to true for this block. When querying cell liveness, we check this bit first and directly return true if it is set. The allocated bitvector is cleared at the end of each GC cycle , and since the global logical version for isNew is also bumped, this effectively clears all the isNew bits, just as we desired.

JSC’s design also supports the so-called constraint solver , which allows specification of implicit reference edges (i.e., edges not represented as pointer in the object). This is mainly used to support JavaScript interaction with DOM. This part is not covered in this post.

Weak references have multiple implementations in JSC. The general (but less efficient) implementation is WeakImpl , denoting a weak reference edge. The data structure managing them is WeakSet , and you can see it in every block footer , and in every PreciseAllocation GC header . However, JSC also employs more efficient specialized implementations to handle the weak map feature in JavaScript. The details are not covered in this post.

In JSC, objects may also have destructors. There are three ways destructors are run. First, when we begin allocating from a block, destructors of the dead cells are run. Second, the IncrementalSweeper periodically scans the blocks and runs destructors. Finally, when the VM shuts down, the lastChanceToFinalize() function is called to ensure that all destructors are run at that time. The details of lastChanceToFinalize() are not covered in this post.

JSC employs a conservative approach for pointers on the stack and in registers: the GC uses UNIX signals to suspend the mutator thread, so it can copy its stack contents and CPU register values to search for data that looks like pointers. However, it’s important to note that a UNIX signal is not used to suspend the execution of the mutator: the mutator always actively suspends itself at a safe point. This is critical, as otherwise it could be suspended at weird places, for example, in a HeapCell::isLive check after it has read isNew but before it has read isMarked , and then GC did isNew |= isMarked, isMarked = false , and boom. So it seems like the only reason to suspend the thread is for the GC to get the CPU register values, including the SP register value so the GC knows where the stack ends. It’s unclear to me if it’s possible to do so in a cooperative manner instead of using costly UNIX signals.

Acknowledgements

I thank Saam Barati from Apple’s JSC team for his enormous help on this blog post. Of course, any mistakes in this post are mine.

  • A brief stop-the-world pause is still required at the start and end of each GC cycle, and may be intentionally performed if the mutator thread (i.e.the thread running JavaScript code) is producing garbage too fast for the GC thread to keep up with. ↩︎
  • The actual allocation logic is implemented in LocalAllocator . Despite that in the code BlockDirectory is holding a linked list of LocalAllocator , (at time of writing, for the codebase version linked in this blog) the linked list always contains exactly one element, so the BlockDirectory and LocalAllocator is one-to-one and can be viewed as an integrated component. This relationship might change in the future, but it doesn’t matter for the purpose of this post anyway. ↩︎
  • Since the footer resides at the end of a 16KB block, and the block is also 16KB aligned, one can do a simple bit math from any object pointer to access the footer of the block it resides in. ↩︎
  • Similar to that per-cell information is aggregated and stored in the block footer, per-block information is aggregated as bitvectors and stored in BlockDirectory for fast lookup. Specifically, two bitvectors empty and canAllocateButNotEmpty track if a block is empty, or partially empty. The code is relatively confusing because the bitvectors are laid out in a non-standard way to make resizing easier, but conceptually it’s just one bitvector for each boolean per-block property. ↩︎
  • While seemingly straightforward, it is not straightforward at all (as you can see in the code). The free cells are marked free by the GC, and due to concurrency and performance optimization the logic becomes very tricky: we will revisit this later. ↩︎
  • In fact, it also attempts to steal blocks from other allocators, and the OS memory allocator may have some special requirements required for the VM, but we ignore those details for simplicity. ↩︎
  • In the current implementation, the list of sizes (byte) are 16, 32, 48, 64, 80, then 80 * 1.4 ^ n for n >= 1 up to about 8KB. Exponential growth guarantees that the overhead due to internal fragmentation is at most a fraction (in this case, 40%) of the total allocation size. ↩︎
  • An interesting implementation detail is that IsoSubspace and CompleteSubspace always return memory aligned to 16 bytes, but PreciseAllocation always return memory address that has reminder 8 module 16. This allows identifying whether an object is allocated by PreciseAllocation with a simple bit math. ↩︎
  • JSC has another small optimization here. Sometimes a IsoSubspace contains so few objects that it’s a waste to hold them using a 16KB memory page (the block size of BlockDirectory ). So the first few memory pages of IsoSubspace use the so-called “lower-tier”, which are smaller memory pages allocated by PreciseAllocation . In this post, we will ignore this design detail for simplicity. ↩︎
  • Memory of an IsoSubspace is only used by this IsoSubspace , never stolen by other allocators. As a result, a memory address in IsoSubspace can only be reused to allocate objects of the same type. So for any type A allocated by IsoSubspace , even if there is a use-after-free bug on type A , it is impossible to allocate A , free it, allocate type B at the same address, and exploit the bug to trick the VM into interpreting an integer field in B controlled by attacker as a pointer field in A . ↩︎
  • In some GC schemes, an eden object is required to survive two (instead of one) eden GCs to be considered in old space. The purpose of such design is to make sure that any old space object is at least one eden-GC-gap old. In contrast, in JSC’s design, an object created immediately before an eden collection will be considered to be in old space immediately, which then can only be reclaimed via a full GC. The performance difference between the two designs is unclear to me. I conjecture JSC chose its current design because it’s easier to make concurrent. ↩︎
  • There is one additional color Grey in the code . However, it turns out that White and Grey makes no difference (you can verify it by grepping all use of cellState and observe that the only comparison on cellState is checking if it is Black ). The comments explaining what the colors mean do not fully capture all the invariants. In my opinion JSC should really clean it up and update the comment, as it can easily cause confusion to readers who intend to understand the design. ↩︎
  • The bit is actually called isNewlyAllocated in the code. We shorten it to isNew for convenience in this post. ↩︎
  • Safe point is a terminology in GC. At a safe point , the heap and stack is in a coherent state understandable by the GC, so the GC can correctly trace out which objects are dead or live. ↩︎
  • For PreciseAllocation , all allocated objects are chained into a linked list, so we can traverse all objects (live or dead) easily. This is not efficient: we will explain the optimizations for CompleteSubspace later. ↩︎
  • Keep in mind that while this is true for now, as we add more optimizations to the design, this will no longer be true. ↩︎
  • Note that we push the old space object into the queue, not the eden object, because this pointer could have been overwritten at the start of the GC cycle, making the eden object potentially collectable. ↩︎
  • Also note that all objects dead before this GC cycle, i.e. the free cells of a block in CompleteSubspace , still have isNew = false and isMarked = false , as desired. ↩︎
  • Recall that under generational hypothesis, most objects die young. Therefore, that “all objects in an eden block are found dead during eden GC” is something completely plausible. ↩︎
  • In JSC, the version is stored in a uint32_t and they have a bunch of logic to handle the case that it overflows uint32_t . In my humble opinion, this is an overoptimization that results in very hard-to-test edge cases, especially in a concurrent setting. So we will ignore this complexity: one can easily avoid these by spending 8 more bytes per block footer to have uint64_t version number instead. ↩︎
  • Note that any number of eden GC cycles may have run between the last full GC cycle and the current full GC cycle, but eden GCs do not bump the mark version. So for any object born before the last GC cycle (eden or full), the isMarked bit honestly reflects whether it is live, and we accept the bit since its mark version can be at most off-by-one. For objects born after the last GC cycle, the isNew version must be the latest, so we can tell the object is alive through isNew . In both cases, the scheme correctly determines whether an object is alive, just as desired. ↩︎
  • And probably not: first, true sharing and false sharing between the GC and the mutator can cause slowdowns. Second, as we have covered before, JSC uses a Time-Space Scheduler to prevent the mutator from allocating too fast while the GC is running. Specifically, the mutator is intentionally suspended for at least 30% of the duration. So as long as the GC is running, the mutator suffers a 30%-or-more “performance tax”. ↩︎
  • The real story is a bit more complicated. JSC actually reuses the same VM for different JavaScript scripts. However, at any moment, at most one of the scripts can be running. So technically, there are multiple mutually-exclusive mutator threads, but this doesn’t affect our GC story. ↩︎
  • The GC needs to inspect a lot of cells, and its logic is already complex enough, so having one less special-case branch is probably beneficial for both engineering and performance. ↩︎

WebAssembly Garbage Collection (WasmGC) now enabled by default in Chrome

Thomas Steiner

There are two types of programming languages: garbage-collected programming languages and programming languages that require manual memory management. Examples of the former, among many others, are Kotlin, PHP, and Java. Examples of the latter are C, C++, and Rust. As a general rule, higher-level programming languages are more likely to have garbage collection as a standard feature. In this blog post, the focus is on such garbage-collected programming languages and how they can be compiled to WebAssembly (Wasm). But what is garbage collection (often referred to as GC) to begin with?


Garbage collection

In simplified terms, the idea of garbage collection is the attempt to reclaim memory which was allocated by the program, but that is no longer referenced. Such memory is called garbage. There are many strategies for implementing garbage collection. One of them is reference counting, where the objective is to count the number of references to objects in memory. When there are no more references to an object, it can be marked as no longer used and thus ready for garbage collection. PHP's garbage collector uses reference counting , and the Xdebug extension's xdebug_debug_zval() function lets you peek under its hood. Consider the following PHP program (a minimal sketch of such a program; the exact values are placeholders):
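    <?php
    // A minimal sketch of the program described below; requires the Xdebug
    // extension for xdebug_debug_zval(). Exact output format varies by version.
    $a = (string) rand();    // a random number cast to a string
    xdebug_debug_zval('a');  // refcount=1: only $a references the value

    $b = $a;
    $c = $a;
    xdebug_debug_zval('a');  // refcount=3: $a, $b and $c

    $b = 42;                 // $b now holds a number instead
    xdebug_debug_zval('a');  // refcount=2

    unset($c);
    xdebug_debug_zval('a');  // refcount=1

    $a = null;               // no references remain: the string is garbage
    xdebug_debug_zval('a');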

The program assigns a random number cast to a string to a new variable called a . It then creates two new variables, b and c , and assigns them the value of a . After that, it reassigns b to the number 42 , and then unsets c . Finally, it sets the value of a to null . Annotating each step of the program with xdebug_debug_zval() , you can see the garbage collector's reference counter at work.

Running the example logs output in which you can see how the number of references to the value of the variable a decreases after each step, which makes sense given the code sequence. (Your random number will be different, of course.)

There are other challenges with garbage collection, like detecting cycles , but for this article, having a basic level of understanding of reference counting is enough.

Programming languages are implemented in other programming languages

It may feel like inception, but programming languages are implemented in other programming languages. For example, the PHP runtime is primarily implemented in C. You can check out the PHP source code on GitHub . PHP's garbage collection code is mainly located in the file zend_gc.c . Most developers will install PHP via the package manager of their operating system. But developers can also build PHP from the source code . For example, in a Linux environment, the steps ./buildconf && ./configure && make would build PHP for the Linux runtime. But this also means that the PHP runtime can be compiled for other runtimes, like, you guessed it, Wasm.

Traditional methods of porting languages to the Wasm runtime

Independently from the platform PHP is running on, PHP scripts are compiled into the same bytecode and run by the Zend Engine . The Zend Engine is a compiler and runtime environment for the PHP scripting language. It consists of the Zend Virtual Machine (VM), which is composed of the Zend Compiler and the Zend Executor. Languages like PHP that are implemented in other high-level languages like C commonly have optimizations that target specific architectures, such as Intel or ARM, and require a different backend for each architecture. In this context, Wasm represents a new architecture. If the VM has architecture-specific code, like just-in-time (JIT) or ahead-of-time (AOT) compilation, then the developer also implements a backend for JIT/AOT for the new architecture. This approach makes a lot of sense because often the main part of the codebase can just be recompiled for each new architecture.

Given how low-level Wasm is, it is natural to try the same approach there: Recompile the main VM code with its parser, library support, garbage collection, and optimizer to Wasm, and implement a JIT or AOT backend for Wasm if needed. This has been possible since the Wasm MVP, and it works well in practice in many cases. In fact, PHP compiled to Wasm is what powers the WordPress Playground . Learn more about the project in the article Build in-browser WordPress experiences with WordPress Playground and WebAssembly .

However, PHP Wasm runs in the browser in the context of the host language JavaScript. In Chrome, JavaScript and Wasm are run in V8 , Google's open source JavaScript engine that implements ECMAScript as specified in ECMA-262 . And, V8 already has a garbage collector . This means developers making use of, for example, PHP compiled to Wasm, end up shipping a garbage collector implementation of the ported language (PHP) to the browser that already has a garbage collector, which is as wasteful as it sounds. This is where WasmGC comes in.

The other problem of the old approach of letting Wasm modules build their own GC on top of Wasm's linear memory is that there's then no interaction between Wasm's own garbage collector and the built-on-top garbage collector of the compiled-to-Wasm language, which tends to cause problems like memory leaks and inefficient collection attempts. Letting Wasm modules reuse the existing built-in GC avoids these issues.

Porting programming languages to new runtimes with WasmGC

WasmGC is a proposal of the WebAssembly Community Group . The current Wasm MVP implementation is only capable of dealing with numbers, that is, integers and floats, in linear memory, and with the reference types proposal being shipped, Wasm can additionally hold on to external references. WasmGC now adds struct and array heap types, which means support for non-linear memory allocation. Each WasmGC object has a fixed type and structure, which makes it easy for VMs to generate efficient code to access their fields without the risk of deoptimizations that dynamic languages like JavaScript have. This proposal thereby adds efficient support for high-level managed languages to WebAssembly, via struct and array heap types that enable language compilers targeting Wasm to integrate with a garbage collector in the host VM. In simplified terms, this means that with WasmGC, porting a programming language to Wasm means the programming language's garbage collector no longer needs to be part of the port, but instead the existing garbage collector can be used.

To verify the real-world impact of this improvement, Chrome's Wasm team has compiled versions of the Fannkuch benchmark (which allocates data structures as it works) from C , Rust , and Java . The C and Rust binaries could be anywhere from 6.1 K to 9.6 K depending on the various compiler flags, while the Java version is much smaller at only 2.3 K ! C and Rust do not include a garbage collector, but they do still bundle malloc/free to manage memory, and the reason Java is smaller here is because it doesn't need to bundle any memory management code at all. This is just one specific example, but it shows that WasmGC binaries have the potential of being very small, and this is even before any significant work on optimizing for size.

Seeing a WasmGC-ported programming language in action

Kotlin/Wasm

One of the first programming languages that has been ported to Wasm thanks to WasmGC is Kotlin in the form of Kotlin/Wasm . The demo , with source code courtesy of the Kotlin team, is shown in the following listing.

Now you may be wondering what the point is, since the Kotlin code above basically consists of the JavaScript DOM APIs converted to Kotlin . It starts to make more sense in combination with Compose Multiplatform , which allows developers to build upon the UI they may already have created for their Android Kotlin apps. Check out an early exploration of this with the Kotlin/Wasm image viewer demo and explore its source code , likewise courtesy of the Kotlin team.

Dart and Flutter

The Dart and Flutter teams at Google are also preparing support for WasmGC. The Dart-to-Wasm compilation work is almost complete, and the team is working on tooling support for delivering Flutter web applications compiled to WebAssembly. You can read about the current state of the work in the Flutter documentation . The following demo is the Flutter WasmGC Preview .

Learn more about WasmGC

This blog post has barely scratched the surface and mostly provided a high-level overview of WasmGC. To learn more about the feature, check out these links:

  • A new way to bring garbage collected programming languages efficiently to WebAssembly
  • WasmGC Overview
  • WasmGC post-MVP



WebAssembly advanced in 2023 – but .NET cannot yet use Wasm garbage collection

WebAssembly added key features in 2023 and usage in the browser grew by 70 percent – but Microsoft has no current plan to use Wasm garbage collection for the .NET runtime.

Gerard Gallant, CIO at Dovico and author of WebAssembly in Action, has posted on the Uno site on the state of WebAssembly in 2023 and hopes for 2024. The Uno platform makes use of WebAssembly when running in the browser. “This past year was incredible for WebAssembly,” Gallant claims, thanks to major new features and growing adoption.

Regarding usage, Gallant points to a Google Chrome usage page , which tracks the number of page loads in Chrome that use specific features. When set to WebAssemblyInstantiation, it shows usage up from 1.95 percent on January 1st 2023 to 3.32 percent a year later – an increase of 70 percent, albeit from a small base. Given that WebAssembly is for applications rather than general web pages, those numbers are significant.


As for features, perhaps the biggest is garbage collection, released in Chrome and Firefox by November 2023, though not yet in Apple’s Safari. Built-in garbage collection has the potential to enable smaller Wasm runtimes for languages that use it, such as Java, Kotlin, Dart (used by Flutter), and C#. The size of the compiled code is a big deal, since large downloads mean slow page loading.

Microsoft uses WebAssembly for its Blazor framework, and as noted in the documentation , this works by compiling C# to .NET assemblies and then downloading a Wasm version of the .NET runtime to the browser. “The size of the published app … is a critical performance factor for an app’s usability,” the documents state.

That makes using native Wasm garbage collection desirable, but Aleksey Kliger, a Principal Software Engineer on the .NET team, says on GitHub that “we will continue to monitor the evolution of the post-v1 WasmGC spec, but at this time we are not planning to adopt it.” 

The issue is that using the current spec would require either “altering the semantics of existing .NET code,” or limiting the use of various features, not only in user code but also in the C# base library. Wasm garbage collection works better with Google’s Dart.

Gallant does note that the recently released .NET 8 has a new JIT (just-in-time) compiler for WebAssembly informally called a Jiterpreter – because it creates WebAssembly code on the fly. The trade-off here is that Blazor WebAssembly can work either by AOT (ahead of time) compilation to WebAssembly, which performs well but with large file size, or by compiling to .NET IL (intermediate language) executed by an interpreter implemented in WebAssembly, which performs more slowly but is around half the size. The Jiterpreter adds partial JIT support so that Blazor WebAssembly that is not AOT compiled can get some performance benefits at runtime.

Other new features that came to WebAssembly in 2023 include Tail Calls , which optimize recursive code, and multiple memory blocks for modules, now supported in Chrome and behind a flag in Firefox.


The State of WebAssembly – 2022 and 2023

  • Gerard Gallant
  • Published January 30, 2023

In this article, I will look at the current state of WebAssembly (wasm). I’ll start by revisiting 2022 developments to see if any of my predictions came true and if there were any surprises. Then I’ll try to predict where I think things will go in 2023.


2022 In Review

In 2021, Safari surprised me with how much work went into catching up to the other browsers’ WebAssembly support. So how did Safari do over the past year?

Safari has continued to improve its WebAssembly support by implementing several bug fixes and improvements, but most of its visible work went into improving the browser in other areas. Although there weren’t prominent wasm features implemented this past year compared to what happened the year before, a lot is happening under the hood.

For example, in last year’s prediction, I thought  fixed-width SIMD  was something that Safari would implement in 2022 and round out browser support for it. Unfortunately, fixed-width SIMD didn’t make it into Safari in 2022, but it’s now complete and part of Technical Preview 161!

Update: Safari added support for WASM SIMD on March 28th, 2023. See  Safari 16.4 Support for WebAssembly fixed-width SIMD. How to use it with C# to learn more.

SIMD  (Single Instruction, Multiple Data) is a type of parallel processing that takes advantage of a CPU’s SIMD instructions to perform the same operation on multiple data points simultaneously. This process can result in significant performance gains for things like image processing. Because there are multiple types of SIMD, fixed-width 128-bit SIMD operations were chosen as a starting point.

Another area where Safari picked up the gauntlet in 2022 is with the  Tail Call  feature, which is also now in Technical Preview 161. This has been sitting behind a flag in the Chrome browser for some time now and couldn’t move forward until the specification reached phase 4  (standardization) . For the specification to move to phase 4, at least two web VMs need to support the feature, so another browser, Safari, in this case, needed to implement it.

Tail calls are useful for compilation optimizations and certain forms of control flow, like recursive functions. The following article explains how tail calls work with recursive functions to prevent stack exhaustion: https://leaningtech.com/fantastic-tail-calls-and-how-to-implement-the
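The idea can be illustrated in JavaScript (a language-level sketch: proper tail calls are an ES2015 strict-mode feature that, among major engines, only JavaScriptCore ships as of this writing; the Wasm feature provides the same guarantee at the instruction level):

    "use strict";

    // The recursive call is the last thing the function does, so an engine
    // that implements tail calls can reuse the current stack frame instead
    // of pushing a new one.
    function factorial(n, acc = 1) {
      if (n <= 1) return acc;
      return factorial(n - 1, n * acc); // a call in tail position
    }

    // Without tail-call elimination, factorial(1e6) eventually overflows
    // the call stack; with it, the recursion runs in constant stack space.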

Exception Handling

Chrome and Safari added WebAssembly exception handling support in 2021, but it was still being worked on in Firefox at year-end. In May 2022, Firefox rounded out browser support for this feature, allowing modules to include exceptions without worrying about the performance hit, or the increased code size, that came with using JavaScript as a workaround.

Tooling was quick to jump on this feature. In 2022, .NET 7 and Uno Platform both added support for WebAssembly exception handling as well as SIMD. Uno even went a step further and added experimental support for WebAssembly threading.

Multiple Memories

Currently modules can only have one block of memory. If the module needs to communicate with JavaScript, or another module, by allowing direct reads and writes to its memory, it risks information being disclosed or accidentally corrupted. This proposal would allow a module to have multiple memory blocks where a module could do things like keep one block for internal data while exposing another block for the sharing of information. There are a number of other use cases defined here if you’re interested in learning more: https://github.com/WebAssembly/multi-memory/blob/main/proposals/multi-memory/Overview.md .

This was a feature that I was hoping for in 2022 because I see a lot of uses to having multiple blocks of memory. Unfortunately, I don’t see it being worked on by any of the browser makers at this time.

Garbage Collection

Garbage collection is needed by a number of managed languages. Because WebAssembly doesn’t yet have garbage collection, languages that need it had to either include their own garbage collector or compile their runtime to WebAssembly. Both approaches result in bigger download sizes and a slower startup time especially if it isn’t cached yet.

The garbage collection proposal for WebAssembly was created back in 2017 but it’s been at phase 1 ever since. Over 2022, I was pleasantly surprised to see this proposal move to phase 2 in February and then to phase 3 in October!

All browser makers are building in support and there are already efforts to adjust the compilers for several languages to take advantage of it.

WebAssembly 2.0

When WebAssembly was released as an MVP in 2017, it was considered 1.0. Since then a number of WebAssembly features, like the fixed-width SIMD proposal mentioned earlier, have been standardized.

In 2022, a version 2.0 of the WebAssembly specification was started. This is basically a way of saying that, if a runtime supports WebAssembly 2.0, it supports all of the standardized features that are a part of it. 

The details on the following page are a bit low-level but the list of proposals that make up the changes since 1.0 are listed at the bottom: https://webassembly.github.io/spec/core/appendix/changes.html

Containers & WebAssembly

Aside from the WebAssembly feature proposals and their implementations, this is an item I view as being a significant game changer: WebAssembly modules that can be run by your container engine and can run side-by-side with your existing containers!

Containerd is a container runtime that is used by several container engines like Docker and Kubernetes. A change was made to allow containerd to use a wasm shim and then have that shim use a wasm runtime to run WebAssembly modules.

With this change, you’ll now be able to leverage your existing container infrastructure to run WebAssembly modules. Wasm modules have no access to the underlying system by default, bringing added security. Wasm images are also smaller and start faster than traditional images. Another advantage to this is that you’ll be able to use container repositories like Docker Hub to share your WebAssembly images.

The following article has some examples of running Wasm in Docker: https://docs.docker.com/desktop/wasm/

Microsoft has also released a preview of AKS that runs WebAssembly: https://learn.microsoft.com/en-us/azure/aks/use-wasi-node-pools  

JavaScript Outside the Browser

A lot of work has gone into JavaScript over the years to make it really fast in the browser but, like with many developers wanting their languages to run in the browser too, JavaScript developers also want to run their code outside the browser. 

When it comes to things like edge computing at scale, how quickly your JavaScript code starts running matters. With V8 isolates, startup time is now in the millisecond range but experimentation is happening to see if that can be improved. 

The Bytecode Alliance (the group working on the WebAssembly System Interface or WASI) has taken the Firefox JavaScript engine, SpiderMonkey, and compiled it to WASI. Then they used a clever technique by having a build step that runs the JavaScript code and then takes a snapshot of the initialized bytecode held in linear memory. That snapshot is stored in the data section of a module and all that pre-initialized data is just dropped back into linear memory as needed by the engine, speeding up initialization time dramatically (0.36 milliseconds or 13 times faster in their example). More details on this experimentation can be found here: https://bytecodealliance.org/articles/making-javascript-run-fast-on-webassembly

The SpiderMonkey engine being available as a WASI module means that JavaScript could potentially be embedded in even more systems and run just as fast as it would in a highly optimized web browser.

2023 Expectations

As of January 12th, fixed-width SIMD is in Safari as part of Technology Preview 161. This will round out browser support for this feature when Safari 16.4 is released in the next month or two.

Also included with the Safari Technology Preview 161 release is the tail call feature. With that, the WebAssembly tail call specification has already moved to phase 4.

Chrome was waiting on the specification to move to phase 4 before bringing tail calls out from behind a flag. Now that it has changed phases, we’re likely to see tail calls released in Chrome and Edge in the coming months.

Firefox is working on implementing tail calls and will hopefully release support in the near future to round out browser support.

Now that the garbage collection proposal has reached phase 3, all browser makers are building in support. 

What’s also exciting is seeing that several programming languages like Java, Kotlin, and Dart are already working towards taking advantage of this new feature.

I’m not sure if this is something that will be fully released by browsers in 2023 but I could see portions of it being released, or possibly having it behind a flag, so that toolchains can start playing around with it.

WebAssembly and Containers

With Docker’s WebAssembly support now in beta, and with providers like Azure AKS previewing support, I think that containers will help wasm use explode outside the browser. Being able to use existing infrastructure and services already in place, and the ability to share WebAssembly images in repositories like Docker Hub, are big advantages.

Other Possible Features in 2023

There are quite a few WebAssembly proposals in the works but some of them are already in the Chrome and Firefox browsers behind a flag so I’d think they’re high possibilities for release in 2023. These include: Memory 64, Relaxed SIMD, and Type reflection.

The JavaScript Promise Integration proposal sounds quite useful: the module can continue to be built to call functions in a synchronous manner, while the browser pauses the module's execution as it waits for an asynchronous call to complete. More information on this proposal can be found here: https://v8.dev/blog/jspi

Languages and Tooling Support

With many of the core WebAssembly features now cross browser and more advanced features joining the list like exception handling, tail calls, and garbage collection, I think we’re going to see more languages target WebAssembly and existing tooling improve.

WebAssembly usage is still small, at around 2% of page loads in Chrome, but there has been a steady increase in its use on the web. Even at 2%, that's still a pretty big amount given the size of the web. The web usage metrics can be found here (click the ‘Show all historical data’ checkbox): https://chromestatus.com/metrics/feature/timeline/popularity/2237

There appears to be a lot of adoption of WebAssembly outside the browser in areas like serverless providers and streaming services like Disney and Amazon. I’m hopeful that we’ll also start to see more use within the browser in 2023.


In Conclusion

2022 didn’t really feel like it had a lot of movement as far as feature releases go, but it did feel like a lot of things were coming into place for what’s to come.

Things like tail calls and garbage collection will allow languages to shrink their module sizes and run faster. Additional languages might even be able to target WebAssembly as a result.

I believe the ability to use WebAssembly side-by-side with your existing containers, as well as share WebAssembly images in container repositories, is going to help the ecosystem grow even more outside the browser.

Finally, projects like compiling SpiderMonkey to WASI could see even faster JavaScript execution outside the browser and the possibility of more places where your JavaScript code can run.


Garbage collection

Memory management in JavaScript is performed automatically and invisibly to us. We create primitives, objects, functions… All that takes memory.

What happens when something is not needed any more? How does the JavaScript engine discover it and clean it up?

Reachability

The main concept of memory management in JavaScript is reachability .

Simply put, “reachable” values are those that are accessible or usable somehow. They are guaranteed to be stored in memory.

There’s a base set of inherently reachable values that cannot be deleted for obvious reasons.

For instance:

  • The currently executing function, its local variables and parameters.
  • Other functions on the current chain of nested calls, their local variables and parameters.
  • Global variables.
  • (there are some other, internal ones as well)

These values are called roots .

Any other value is considered reachable if it’s reachable from a root by a reference or by a chain of references.

For instance, if there’s an object in a global variable, and that object has a property referencing another object, that object is considered reachable. And those that it references are also reachable. Detailed examples to follow.

There’s a background process in the JavaScript engine that is called garbage collector . It monitors all objects and removes those that have become unreachable.

A simple example

Here’s the simplest example:
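    // user has a reference to the object
    let user = {
      name: "John"
    };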

The global variable "user" references the object {name: "John"} (we’ll call it John for brevity). The "name" property of John stores a primitive, so it’s kept inside the object.

If the value of user is overwritten, the reference is lost:
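    user = null;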

Now John becomes unreachable. There’s no way to access it, no references to it. The garbage collector will junk the data and free the memory.

Two references

Now let’s imagine we copied the reference from user to admin :
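    let user = {
      name: "John"
    };

    let admin = user; // two variables now reference the same object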

Now if we do the same:
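    user = null;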

…Then the object is still reachable via admin global variable, so it must stay in memory. If we overwrite admin too, then it can be removed.

Interlinked objects

Now a more complex example. The family:
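    function marry(man, woman) {
      woman.husband = man;
      man.wife = woman;

      return {
        father: man,
        mother: woman
      };
    }

    let family = marry({
      name: "John"
    }, {
      name: "Ann"
    });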

Function marry “marries” two objects by giving them references to each other and returns a new object that contains them both.

The resulting memory structure: the returned family object references both John and Ann, and they reference each other through the wife and husband properties.

As of now, all objects are reachable.

Now let’s remove two references:
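    delete family.father;
    delete family.mother.husband;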

It’s not enough to delete only one of these two references, because all objects would still be reachable.

But if we delete both, then we can see that John has no incoming reference any more:

Outgoing references do not matter. Only incoming ones can make an object reachable. So, John is now unreachable and will be removed from the memory with all its data that also became inaccessible.

After garbage collection, John is removed from memory; only Ann remains reachable, through the family object.

Unreachable island

It is possible that the whole island of interlinked objects becomes unreachable and is removed from the memory.

The source object is the same as above. Then:
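    family = null;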

Now the in-memory picture is this: the whole island of interlinked objects has been cut off from the root.

This example demonstrates how important the concept of reachability is.

It’s obvious that John and Ann are still linked, both have incoming references. But that’s not enough.

The former "family" object has been unlinked from the root, there’s no reference to it any more, so the whole island becomes unreachable and will be removed.

Internal algorithms

The basic garbage collection algorithm is called “mark-and-sweep”.

The following “garbage collection” steps are regularly performed:

  • The garbage collector takes roots and “marks” (remembers) them.
  • Then it visits and “marks” all references from them.
  • Then it visits marked objects and marks their references. All visited objects are remembered, so as not to visit the same object twice in the future.
  • …And so on until every reference reachable from the roots has been visited.
  • All objects except marked ones are removed.

For instance, imagine an object structure where a cluster of interlinked objects forms an “unreachable island” off to one side, disconnected from the roots. Let’s see how the “mark-and-sweep” garbage collector deals with it. The first step marks the roots; then their references are followed and the referenced objects are marked; then further references are followed and marked, while possible. The objects that could not be visited in the process are considered unreachable and will be removed.

We can also imagine the process as spilling a huge bucket of paint from the roots, that flows through all references and marks all reachable objects. The unmarked ones are then removed.
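To make the traversal concrete, here is a toy mark pass sketched in JavaScript. It is an illustration only, not how an engine is implemented: real collectors walk their internal heap representation, and the mark function and object-graph layout here are invented for the sketch.

    // Toy mark phase: walk the object graph from the roots, remembering
    // every object that can be reached.
    function mark(roots) {
      const marked = new Set();
      const stack = [...roots];
      while (stack.length > 0) {
        const value = stack.pop();
        if (value === null || typeof value !== "object" || marked.has(value)) {
          continue; // not an object, or visited already
        }
        marked.add(value);                   // "mark" the object
        stack.push(...Object.values(value)); // follow its outgoing references
      }
      return marked; // a sweep phase would reclaim every object not in this set
    }

    const root = { a: { b: {} } };
    console.log(mark([root]).size); // 3: root, root.a, root.a.b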

That’s the concept of how garbage collection works. JavaScript engines apply many optimizations to make it run faster and not introduce any delays into the code execution.

Some of the optimizations:

  • Generational collection – objects are split into two sets: “new ones” and “old ones”. In typical code, many objects have a short life span: they appear, do their job and die fast, so it makes sense to track new objects and clear the memory from them if that’s the case. Those that survive for long enough, become “old” and are examined less often.
  • Incremental collection – if there are many objects, and we try to walk and mark the whole object set at once, it may take some time and introduce visible delays in the execution. So the engine splits the whole set of existing objects into multiple parts. And then clear these parts one after another. There are many small garbage collections instead of a total one. That requires some extra bookkeeping between them to track changes, but we get many tiny delays instead of a big one.
  • Idle-time collection – the garbage collector tries to run only while the CPU is idle, to reduce the possible effect on the execution.

There exist other optimizations and flavours of garbage collection algorithms. As much as I’d like to describe them here, I have to hold off, because different engines implement different tweaks and techniques. And, what’s even more important, things change as engines develop, so studying deeper “in advance”, without a real need, is probably not worth it. Unless, of course, it is a matter of pure interest; then there are some links for you below.

The main things to know:

  • Garbage collection is performed automatically. We cannot force or prevent it.
  • Objects are retained in memory while they are reachable.
  • Being referenced is not the same as being reachable (from a root): a pack of interlinked objects can become unreachable as a whole, as we’ve seen in the example above.

Modern engines implement advanced algorithms of garbage collection.

A general book “The Garbage Collection Handbook: The Art of Automatic Memory Management” (R. Jones et al) covers some of them.

If you are familiar with low-level programming, more detailed information about V8’s garbage collector is in the article A tour of V8: Garbage Collection .

The V8 blog also publishes articles about changes in memory management from time to time. Naturally, to learn more about garbage collection, you’d better prepare by learning about V8 internals in general and reading the blog of Vyacheslav Egorov, who worked as one of the V8 engineers. I say “V8” because it is best covered by articles on the internet. For other engines, many approaches are similar, but garbage collection differs in many aspects.

In-depth knowledge of engines is good when you need low-level optimizations. It would be wise to plan that as the next step after you’re familiar with the language.


FinalizationRegistry

A FinalizationRegistry object lets you request a callback when a value is garbage-collected.

Description

FinalizationRegistry provides a way to request that a cleanup callback get called at some point when a value registered with the registry has been reclaimed (garbage-collected). (Cleanup callbacks are sometimes called finalizers .)

Note: Cleanup callbacks should not be used for essential program logic. See Notes on cleanup callbacks for details.

You create the registry passing in the callback:
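    const registry = new FinalizationRegistry((heldValue) => {
      // called sometime after a registered value has been reclaimed
      console.log(`cleanup: ${heldValue}`);
    });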

Then you register any value you want a cleanup callback for by calling the register method, passing in the value and a held value for it:
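    // target: the value whose reclamation we want to be notified about
    registry.register(target, "some value");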

The registry does not keep a strong reference to the value, as that would defeat the purpose (if the registry held it strongly, the value would never be reclaimed). In JavaScript, objects and non-registered symbols are garbage collectable, so they can be registered in a FinalizationRegistry object as the target or the token.

If target is reclaimed, your cleanup callback may be called at some point with the held value you provided for it ( "some value" in the above). The held value can be any value you like: a primitive or an object, even undefined . If the held value is an object, the registry keeps a strong reference to it (so it can pass it to your cleanup callback later).

If you might want to unregister a registered target value later, you pass a third value, which is the unregistration token you'll use later when calling the registry's unregister function to unregister the value. The registry only keeps a weak reference to the unregister token.

It's common to use the target value itself as the unregister token, which is just fine:
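    registry.register(target, "some value", target);
    // ...some time later, if you no longer care about the cleanup callback:
    registry.unregister(target);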

It doesn't have to be the same value, though; it can be a different one:
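    const token = {};
    registry.register(target, "some value", token);
    // ...some time later:
    registry.unregister(token);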

Avoid where possible

Correct use of FinalizationRegistry takes careful thought, and it's best avoided if possible. It's also important to avoid relying on any specific behaviors not guaranteed by the specification. When, how, and whether garbage collection occurs is down to the implementation of any given JavaScript engine. Any behavior you observe in one engine may be different in another engine, in another version of the same engine, or even in a slightly different situation with the same version of the same engine. Garbage collection is a hard problem that JavaScript engine implementers are constantly refining and improving their solutions to.

Here are some specific points included by the authors in the proposal that introduced FinalizationRegistry :

Garbage collectors are complicated. If an application or library depends on GC cleaning up a WeakRef or calling a finalizer [cleanup callback] in a timely, predictable manner, it's likely to be disappointed: the cleanup may happen much later than expected, or not at all. Sources of variability include:

  • One object might be garbage-collected much sooner than another object, even if they become unreachable at the same time, e.g., due to generational collection.
  • Garbage collection work can be split up over time using incremental and concurrent techniques.
  • Various runtime heuristics can be used to balance memory usage and responsiveness.
  • The JavaScript engine may hold references to things which look like they are unreachable (e.g., in closures, or inline caches).
  • Different JavaScript engines may do these things differently, or the same engine may change its algorithms across versions.
  • Complex factors may lead to objects being held alive for unexpected amounts of time, such as use with certain APIs.

Notes on cleanup callbacks

  • Developers shouldn't rely on cleanup callbacks for essential program logic. Cleanup callbacks may be useful for reducing memory usage across the course of a program, but are unlikely to be useful otherwise.
  • If your code has just registered a value to the registry, that target will not be reclaimed until the end of the current JavaScript job . See notes on WeakRefs for details.
  • A conforming JavaScript implementation, even one that does garbage collection, is not required to call cleanup callbacks. When and whether it does so is entirely down to the implementation of the JavaScript engine. When a registered object is reclaimed, any cleanup callbacks for it may be called then, or some time later, or not at all.
  • It's likely that major implementations will call cleanup callbacks at some point during execution, but those calls may be substantially after the related object was reclaimed. Furthermore, if there is an object registered in two registries, there is no guarantee that the two callbacks are called next to each other — one may be called and the other never called, or the other may be called much later.
  • There are also situations where even implementations that normally call cleanup callbacks are unlikely to call them: when the JavaScript program shuts down entirely (for instance, closing a tab in a browser), or when the FinalizationRegistry instance itself is no longer reachable by JavaScript code.
  • If the target of a WeakRef is also in a FinalizationRegistry , the WeakRef 's target is cleared at the same time or before any cleanup callback associated with the registry is called; if your cleanup callback calls deref on a WeakRef for the object, it will receive undefined .

Constructor

Creates a new FinalizationRegistry object.

Instance properties

These properties are defined on FinalizationRegistry.prototype and shared by all FinalizationRegistry instances.

FinalizationRegistry.prototype.constructor : The constructor function that created the instance object. For FinalizationRegistry instances, the initial value is the FinalizationRegistry constructor.

FinalizationRegistry.prototype[@@toStringTag] : The initial value of the @@toStringTag property is the string "FinalizationRegistry" . This property is used in Object.prototype.toString() .

Instance methods

FinalizationRegistry.prototype.register() : Registers an object with the registry in order to get a cleanup callback when/if the object is garbage-collected.

FinalizationRegistry.prototype.unregister() : Unregisters an object from the registry.

Creating a new registry
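As described above, you create the registry by passing the cleanup callback to the constructor:

    const registry = new FinalizationRegistry((heldValue) => {
      console.log(`cleanup: ${heldValue}`);
    });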

Registering objects for cleanup

Then you register any objects you want a cleanup callback for by calling the register method, passing in the object and a held value for it:
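    const target = {};
    registry.register(target, "some value");
    // if target is reclaimed, "cleanup: some value" may eventually be logged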

Callbacks never called synchronously

No matter how much pressure you put on the garbage collector, the cleanup callback will never be called synchronously. The object may be reclaimed synchronously, but the callback will always be called sometime after the current job finishes:
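    // A sketch of such an experiment (the shape of the original listing
    // is assumed): allocate garbage in a tight synchronous loop.
    const registry = new FinalizationRegistry((held) => {
      console.log(`reclaimed #${held}`);
    });

    for (let counter = 0; counter < 5000; counter++) {
      registry.register({}, counter); // each object becomes garbage immediately
    }
    // No "reclaimed" message can appear before this synchronous loop
    // (the current job) finishes, however much garbage it creates.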

However, if you allow a little break between each allocation, the callback may be called sooner:
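    const registry = new FinalizationRegistry((held) => {
      console.log(`reclaimed #${held}`);
    });

    (async () => {
      for (let counter = 0; counter < 5000; counter++) {
        registry.register({}, counter);
        await new Promise((resolve) => setTimeout(resolve)); // a little break
      }
    })();
    // Now a "reclaimed" message may be logged with a counter value
    // smaller than 5000.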

There's no guarantee that the callback will be called sooner or if it will be called at all, but there's a possibility that the logged message has a counter value smaller than 5000.




Posted on Feb 23, 2023

Experiments with the JavaScript Garbage Collector

Memory leaks in web applications are widespread and notoriously difficult to debug. If we want to avoid them, it helps to understand how the garbage collector decides what objects can and cannot be collected. In this article we'll take a look at a few scenarios where its behavior might surprise you.

If you're unfamiliar with the basics of garbage collection, a good starting point would be A Crash Course in Memory Management by Lin Clark or Memory Management on MDN. Consider reading one of those before continuing.

Detecting Object Disposal

Recently I've learned that JavaScript provides a class called FinalizationRegistry that allows you to programmatically detect when an object is garbage-collected. It's available in all major web browsers and Node.js.

A basic usage example:
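    const registry = new FinalizationRegistry((message) => {
      console.log(message);
    });

    function example() {
      const x = {};
      registry.register(x, "x has been collected");
    }

    example();
    // sometime after garbage collection runs: "x has been collected"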

When the example() function returns, the object referenced by x is no longer reachable and can be disposed of.

Most likely, though, it won't be disposed immediately. The engine can decide to handle more important tasks first, or to wait for more objects to become unreachable and then dispose of them in bulk. But you can force garbage collection by clicking the little trash icon in the DevTools ➵ Memory tab. Node.js doesn't have a trash icon, but it provides a global gc() function when launched with the --expose-gc flag.


With FinalizationRegistry in my bag of tools, I decided to examine a few scenarios where I wasn't sure how the garbage collector was going to behave. I encourage you to look at the examples below and make your own predictions about how they're going to behave.

Example 1: Nested Objects
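A setup along these lines matches the description that follows (a reconstruction; the empty object literals are placeholders):

    function example() {
      const x = {};
      const y = { x };
      const z = { y };
      globalThis.temp = x; // only x remains referenced from outside
    }

    example();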

Here, even though the variable x no longer exists after the example() function has returned, the object referenced by x is still being held by the globalThis.temp variable. z and y on the other hand can no longer be reached from the global object or the execution stack, and will be collected. If we now run globalThis.temp = undefined , the object previously known as x will be collected as well. No surprises here.

Example 2: Closures
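One shape such an example can take (a sketch consistent with the discussion below; the exact structure of the original is an assumption):

    function example() {
      const x = {};
      const y = {};
      const z = { x, y };
      globalThis.temp = () => z.x; // the closure captures z
    }

    example();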

In this example we can still reach x by calling globalThis.temp() . We can no longer reach z or y . But surprisingly, despite no longer being reachable, z and y are not getting collected.

A possible theory is that since z.x is a property lookup, the engine doesn't really know if it can replace the lookup with a direct reference to x . For example, what if x is a getter. So the engine is forced to keep the reference to z , and consequently to y . To test this theory, let's modify the example: globalThis.temp = () => { z; }; . Now there's clearly no way to reach z , but it's still not getting collected.

What I think is happening is that the garbage collector only pays attention to the fact that z is in the lexical scope of the closure assigned to temp , and doesn't look any further than that. Traversing the entire object graph and marking objects that are still "alive" is a performance-critical operation that needs to be fast. Even though the garbage collector could theoretically figure out that z is not used, that would be expensive. And not particularly useful, since your code doesn't typically contain variables that are just chilling in there.

Example 3: Eval
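For instance (again a reconstruction; the direct eval call inside the closure is the part that matters):

    function example() {
      const x = {};
      globalThis.temp = (name) => eval(name); // direct eval
    }

    example();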

Here we can still reach x from the global scope by calling temp('x') . The engine cannot safely collect any objects within the lexical scope of eval . And it doesn't even try to analyze what arguments the eval receives. Even something innocent like globalThis.temp = () => eval(1) would prevent garbage collection.

What if eval is hiding behind an alias, e.g. globalThis.exec = eval ? Or what if it's used without being ever mentioned explicitly? E.g.:
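    // One possibility (a sketch):
    function example() {
      const x = {};
      globalThis.temp = (fn) => fn("x"); // fn might turn out to be eval
    }

    example();
    globalThis.temp(eval); // an *indirect* eval call, as explained below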

Does it mean that every function call is a suspect, and nothing ever can be safely collected? Fortunately, no. JavaScript makes a distinction between direct and indirect eval . Only when you directly call eval(string) it will execute the code in the current lexical scope. But anything even a tiny bit less direct, such as eval?.(string) , will execute the code in the global scope, and it won't have access to the enclosing function's variables.
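A quick way to see the difference between the two (a sketch):

    function direct() {
      const secret = 42;
      return eval("secret"); // direct eval: sees the local scope, returns 42
    }

    function indirect() {
      const secret = 42;
      return eval?.("secret"); // indirect eval: runs in the global scope,
                               // throws ReferenceError: secret is not defined
    }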

Example 4: DOM Elements
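The structure implied by the description below (a reconstruction using detached elements: a parent z with children x and y):

    function example() {
      const z = document.createElement("div"); // parent
      const x = document.createElement("div");
      const y = document.createElement("div");
      z.append(x, y); // x and y become siblings inside z
      globalThis.temp = x;
    }

    example();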

This example is somewhat similar to the first one, but it uses DOM elements instead of plain objects. Unlike plain objects, DOM elements have links to their parents and siblings. You can reach z through temp.parentElement , and y through temp.nextSibling . So all three elements will stay alive.

Now if we execute temp.remove() , y and z will be collected because x has been detached from its parent. But x will not be collected because it's still referenced by temp .

Example 5: Promises

Warning: this example is a more complex one, showcasing a scenario involving asynchronous operations and promises. Feel free to skip it, and jump to the summary below.

What happens to promises that are never resolved or rejected? Do they keep floating in memory with the entire chain of .then 's attached to them?

As a realistic example, here's a common anti-pattern in React projects:
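    // Inside a function component; asyncOperation and setStatus are
    // placeholders for whatever the real component uses.
    useEffect(() => {
      let isMounted = true;

      asyncOperation().then(() => {
        if (isMounted) {
          setStatus("ready");
        }
      });

      return () => {
        isMounted = false;
      };
    }, []);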

If asyncOperation() never settles, what's going to happen to the effect function? Will it keep waiting for the promise even after the component has unmounted? Will it keep isMounted and setStatus alive?

Let's reduce this example to a more basic form that doesn't require React:
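    function example() {
      const x = {};

      new Promise((resolve) => {
        // the executor neither calls nor stores resolve
      }).then(() => {
        console.log(x); // the .then callback captures x
      });
    }

    example();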

Previously we saw that the garbage collector doesn't try to perform any kind of sophisticated analysis, and merely follows pointers from object to object to determine their "liveness". So it might come as a surprise that in this case x is going to be collected!

Let's take a look at how this example might look when something is still holding a reference to the Promise resolve . In a real-world scenario this could be setTimeout() or fetch() .
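    function example() {
      const x = {};

      new Promise((resolve) => {
        globalThis.temp = resolve; // something keeps resolve alive
      }).then(() => {
        console.log(x);
      });
    }

    example();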

Here globalThis keeps temp alive, which keeps resolve alive, which keeps .then(...) callback alive, which keeps x alive. As soon as we execute globalThis.temp = undefined , x can be collected. By the way, saving a reference to the promise itself wouldn't prevent x from being collected.

Going back to the React example: if something is still holding a reference to the Promise resolve , the effect and everything in its lexical scope will stay alive even after the component has unmounted. It will be collected when the promise settles, or when the garbage collector can no longer trace the path to the resolve and reject of the promise.

In conclusion

In this article we've taken a look at FinalizationRegistry and how it can be used to detect when objects are collected. We also saw that sometimes the garbage collector is unable to reclaim memory even when it would be safe to do so. Which is why it's helpful to be aware of what it can and cannot do.

It's worth noting that different JavaScript engines and even different versions of the same engine can have wildly different implementations of a garbage collector, and externally observable differences between those.

In fact, the ECMAScript specification doesn't even require implementations to have a garbage collector, let alone prescribe a certain behavior.

However, all of the examples above were verified to behave the same in V8 (Chrome), JavaScriptCore (Safari), and SpiderMonkey (Firefox).



Branch of the spec repo scoped to discussion of GC integration in WebAssembly

WebAssembly/gc

GC Proposal for WebAssembly

This repository is a clone of github.com/WebAssembly/spec/ . It is meant for discussion, prototype specification and implementation of a proposal to add garbage collection (GC) support to WebAssembly.

See the overview for a high-level summary and rationale of the proposal. Note: the concrete details here are out of date.

See the MVP for an up-to-date overview of the concrete language extensions that are proposed for the first stage of GC support in Wasm.

See the Post-MVP for possible future extensions in later stages.

See the modified spec for the completed spec for the first-stage proposal described in MVP.md.

This repository is based on the function references proposal as a baseline and includes all respective changes.

Original README from upstream repository follows...

This repository holds a prototypical reference implementation for WebAssembly, which is currently serving as the official specification. Eventually, we expect to produce a specification either written in human-readable prose or in a formal specification language.

It also holds the WebAssembly testsuite, which tests numerous aspects of conformance to the spec.

View the work-in-progress spec at webassembly.github.io/spec .

At this time, the contents of this repository are under development and known to be "incomplet and inkorrect".

Participation is welcome. Discussions about new features, significant semantic changes, or any specification change likely to generate substantial discussion should take place in the WebAssembly design repository first, so that this spec repository can remain focused. And please follow the guidelines for contributing .

For citing WebAssembly in LaTeX, use this bibtex file .

Code of conduct

Contributors 154.

@rossberg

  • WebAssembly 87.4%
  • Python 4.9%
  • JavaScript 2.5%
  • Bikeshed 0.7%
  • Makefile 0.2%
