(cont) details of releasing storage of fixture segments

Fischlurch 2010-12-16 05:45:38 +01:00
parent 5f6cbdc5bc
commit 29df0984ec


@ -1832,8 +1832,8 @@ The fixture is like a grid, where one dimension is given by the [[model ports|Mo
:Thus the exit nodes are keyed by ~Pipe-ID as well (and consequently have a distinct [[stream type|StreamType]]) -- each model port corresponds to {{{<number_of_segments>}}} separate exit nodes, but of course an exit node may be //mute.//
</pre>
</div>
<div title="FixtureStorage" modifier="Ichthyostega" modified="201012140505" created="201012140231" tags="Builder impl operational draft" changecount="7">
<pre>The Fixture datastructure acts as umbrella to hook up the elements of the render engine's processing nodes network (LowLevelModel).
<div title="FixtureStorage" modifier="Ichthyostega" modified="201012160307" created="201012140231" tags="Builder impl operational draft" changecount="25">
<pre>The Fixture data structure acts as umbrella to hook up the elements of the render engine's processing nodes network (LowLevelModel).
Each segment within the [[Segmentation]] of any timeline serves as ''extent'' or unit of memory management: it is built up completely during the corresponding build process and becomes immutable thereafter, finally to be discarded as a whole when superseded by a modified version of that segment (new build process) -- but only after all related render processes are known to be terminated.
Each segment owns an AllocationCluster, which in turn manages all the numerous small-sized objects comprising the render network implementing this segment -- thus the central question is when to //release the segment.//
@ -1844,18 +1844,40 @@ Each segment owns an AllocationCluster, which in turn manages all the numerous s
//Not sure yet.// Of course it would be the simplest approach. KISS.
Basically the concern is that each new render process would have to access the shared counts of all segments it touches.
''Note'': {{{shared_ptr}}} is known to be implemented by a lock-free algorithm (yes it is, at least since boost 1.33. Don't believe what numerous FUD spreaders have written). Thus lock contention isn't a problem, but at least a memory barrier is involved (and if I judge GCC's internal documentation right, currently their barriers extend to //all// globally visible variables)
''Note'': {{{shared_ptr}}} is known to be implemented by a lock-free algorithm (yes it is, at least since boost 1.33. Don't believe what numerous FUD spreaders have written). Thus lock contention isn't a problem, but at least a memory barrier is involved (and if I&amp;nbsp;judge GCC's internal documentation right, currently their barriers extend to //all// globally visible variables)
!!question: alternatives?
There are. As the builder is known to be run again and again, no one forces us to deallocate as soon as we could. That's the classical argument exploited by any garbage collector too. Thus we could just note the fact that a render process has terminated and evaluate all those noted results on a later occasion.
!!exploiting the frame-dispatch step
Irrespective of the dicision for or against ref-counting, it seems reasonable to make use of the //frame dispatch step,// which is necessary anyway. The idea is to give each render proces a //copy//&amp;nbsp; of an dispatcher table object -- basically just a list of time breaking points and a pointer to the relevant exit node. If we keep track of those dispather tables, add a back-link to identify the process and require the process in turn to deregister, we get a tracking of tainted processes for free.
Irrespective of the decision in favour of or against ref-counting, it seems reasonable to make use of the //frame dispatch step,// which is necessary anyway. The idea is to give each render process a //copy//&amp;nbsp; of a dispatcher table object -- basically just a list of time breaking points and a pointer to the relevant exit node. If we keep track of those dispatcher tables, add a back-link to identify the process and require the process in turn to deregister, we get a tracking of tainted processes for free.
But the distinguishing question here is that regarding the impact of such an implementation. What would be the costs?
# creating individual dispatcher tables uses memory
!!assessment
But the primary question here is to judge the impact of such an implementation. What would be the costs?
# creating individual dispatcher tables trades memory for simplified parallelism
# the per-frame lookup is efficient and negligible compared with just building the render context (StateProxy) for that frame
# when a process terminates, we need to take out that dispatcher table and perform the deregistration for each touched segment (?)
!!!Estimations
* number of actually concurrent render processes is at or below 30
* depending on the degree of cleverness on the part of the scheduler, the throughput of processes might be multiplied (a dull scheduler means few processes)
* the total number of segments within the Fixture could range into several thousand
* but playback especially is focussed, which makes a figure of rather several hundred tainted segments more likely
&amp;rArr; we should try quickly to dispose of the working storage after render process termination and just retain a small notification record
&amp;rArr; so the frame dispatcher table should be allocated //within//&amp;nbsp; the process' working storage; moreover it should be tiled
&amp;rArr; we should give a second thought to how actually to find and process the segments to be discarded
!!!identifying tainted and disposable segments
Above estimation hints at the necessity of frequently finding some 30 to 100 segments to be disposed of, out of thousands, assuming the original reason for triggering the build process was a typically local change in the high-level model. We can only discard when all related processes are finished, but there is a larger number of segments as candidates for eviction. These candidates are rather easy to pinpoint -- they will be uncovered during a linear comparison pass prior to committing the changed Fixture. Usually, the number of candidates will be low (localised changes), but global manipulations might invalidate thousands of segments.
* if we frequently pick out the segments actually to be disposed of, there is the danger of performance degradation when the number of segments is high
* the other question is whether we can afford just to keep all of those candidates around, as all of them are bound to become discardable eventually
* and of course there is also the question of how to detect //when// they're due.
;Model A
:use a logarithmic data structure, e.g. a priority queue
;Model B
:keep all candidates around and track the tainted processes instead
</pre>
</div>
<div title="ForwardIterator" modifier="Ichthyostega" modified="200912190027" created="200910312114" tags="Concepts def spec" changecount="17">