This change makes it possible to disentangle the usages of `lib::SeveralBuilder`,
so that at any time during the build process only a single instance is
actively populated, one after another — and thus the required storage can
either be pre-allocated, or dynamically extended and shrunk (when
filling elements into the most recently activated `SeveralBuilder`)
By packaging into a λ-closure, the building of the actual `Port`
implementation objects (≙ `Turnout` instances) is delayed until the
very end of the build process, and then unloaded into yet another
`lib::Several` in a single sweep. Temporarily, those building functor
objects are „hidden“ in the current stack frame, as a new `NodeBuilder`
instance is dropped off with an adapted type parameter (embedding the
λ-type produced by the last nested `PortBuilder` invocation, while
inheriting from previous ones).
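To illustrate the chaining mechanism — a minimal sketch with hypothetical names; the actual `NodeBuilder`/`PortBuilder` types are far more elaborate — each step yields a new builder type that embeds the latest λ while inheriting all previously captured build functors:

```cpp
#include <utility>

// Illustrative sketch (hypothetical names): each addPort() call produces a
// new builder type, embedding the latest λ and inheriting the earlier ones.
template<class PREV, class LAM>
struct ChainedBuilder
  : PREV               // inherit all previously captured build functors
  {
    LAM buildTurnout;  // λ-closure to build one Port/Turnout later

    ChainedBuilder (PREV prev, LAM lam)
      : PREV{std::move (prev)}
      , buildTurnout{std::move (lam)}
      { }

    template<class L2>
    auto
    addPort (L2 lam2)  // chaining: extend the type with a further λ
      {
        return ChainedBuilder<ChainedBuilder, L2>{std::move (*this), std::move (lam2)};
      }
  };

struct BuilderSeed
  {
    template<class L>
    auto addPort (L lam) { return ChainedBuilder<BuilderSeed,L>{*this, std::move (lam)}; }
  };
// at the very end of the chain, all embedded λs can be invoked in one sweep,
// emplacing the Turnout objects into a pre-dimensioned lib::Several.
```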
However, defining a special constructor to cause this »chaining«
poses some challenges (regarding overload resolution). Moreover,
since the actual processing function shall be embedded directly
(as opposed to wrapping it into a `std::function`), further problems
can arise when this function is given as a ''function reference''.
Conduct in-depth analysis to handle a secondary, implementation-related
(and frankly quite challenging) concern regarding the placement of node
and port connectivity data in memory. The intention is for the low-level
model to use a custom data structure based on `lib::Several`, allowing for
flexible and compact arrangement of the connectivity descriptors within
tiled memory blocks, which can then later be discarded in bulk, whenever
a segment of the render graph is superseded. Yet since the generated
descriptors are heterogeneous and, due to virtual functions, can not be
trivially copied, the corresponding placement invocations on the
data builder API must not be mixed, but rather issued in ordered batches,
preceded by a dimensioning call to pre-reserve storage in bulk.
However, doing so directly would jeopardise the open and flexible nature
of the node builder API, thereby creating a dangerous coupling between
the implementation levels of the node graph and of prospective library
wrapper plug-ins in charge of controlling details of the graph layout.
The solution devised here entails a functional helper data structure
created temporarily within the builder API stack frames; the detailed
and local type information provided from within the library plug-in
can thereby be embedded into opaque builder functors, allowing the
actual data generation to be delayed until the final builder step,
at which point the complete number and size requirements of the
connectivity data are known and can be used for dimensioning.
This investigation was set off by a warning regarding an
unused argument in `SeveralBuilder`, using `AllocationPolicy::moveElem()`.
This warning is correct and easy to fix, but (luckily) it drew my
attention to the fact that a `SeveralBuilder<Port>` cannot grow dynamically,
which is somewhat mitigated by the default policy to pre-allocate several
elements, which would work to some degree but waste a lot of memory.
This points to a deeper problem with the implementation pattern used for
all those Builders: they create their product by-value, which must then
be moved into the intended target location.
And doing so is **extremely dangerous**, given that our very goal is to
build a complex data structure internally connected by direct references
and ideally also allocated with a high degree of memory locality.
Unfortunately I do not see any favourable alternative yet;
ideally all products should be `NonCopyable` — but then the builder
implementation scheme would become even more complicated and less intuitive,
and additionally the client code would need to pre-declare the number of
expected Leads and Ports (not clear if this is even feasible).
...and as expected, this turns up quite some inconsistencies,
especially regarding usage of the »buffer types«.
Basically, the `PortBuilder` is responsible for the high-level functionality
and thus must ensure the nested `WiringBuilder` is addressed and parameterised
properly to connect all »slots« of the processing function.
- can use a helper function in the WiringBuilder to fill in connections
- but the actual buffer types passed over these connections are totally
  unchecked at that level, and I can not yet see how this danger can be
  mitigated one level above, where the PortBuilder is used.
- it is still unclear what a »buffer type« actually means; it could
be the pointer type, but it could also imply a class or struct type
to be emplaced into the buffer, which is a special extension to the
`BufferProvider` protocol, yet seems to be used here rather to transport
specific data types required by the actual media handling library (e.g. FFmpeg)
__Analysis__: what kind of verifications are sensible to employ
to cover building, wiring and invocation of render nodes?
Notably, a test should cover requirements and observable functionality,
while ''avoiding direct hard coupling to implementation internals...''
__Draft__: the simplest node builder invocation conceivable...
* decide how to provide a default service for tests,
  while also allowing configuration of more specific services
* as starting point for the prototype: use the `TrackingHeapBlockProvider`
(simply because this is the only implementation available and tested)
Prototyping and analysis revealed that some aspects of the render node wiring
refer to effectively global services and can thus be taken out of the picture
by relying on classical ''Dependency Injection''.
Consequently, `EngineCtx` needs a default implementation, which brings up
a simplistic fall-back version of those services in support of prototyping.
Moreover, dedicated lifecycle functionality must be provided to bring up
and shut down the actual service instances intended for operational use.
...need to pass a binding for the actual processing function
in a way that it acts as a ''prototype'' — since the `Feed`,
i.e. the ''Invocation Adapter'', must be generated anew for each
invocation within the current stack frame
(so as to avoid spurious heap allocations)
...seems that the former is well suited to serve as detail builder
used internally by the latter to provide a simplified standard adaptation
for a given processing function.
The integration can be achieved by layering a specialised detail builder class
on top, which can be entered only by specifying the concrete function or lambda
to wrap for the processing; the further builder API functions to control
the wiring in detail thus become accessible only after the function type
is known. This makes it possible to place the detail builder as a member into
the enclosing port builder and thus to allocate everything within the current
stack frame invoking the builder chain.
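A minimal sketch of this gating, with purely illustrative names (the actual API differs): the wiring functions live on a detail builder, which can only be obtained by fixing the processing function's type first.

```cpp
#include <utility>

// Illustrative sketch (hypothetical names, not the actual Lumiera API)
template<typename FUN>
class DetailBuilder
  {
    FUN procFun_;                      // concrete function type embedded directly
  public:
    explicit DetailBuilder (FUN fun)
      : procFun_{std::move (fun)}
      { }

    DetailBuilder&&
    connectInput (unsigned slot) &&    // wiring controls exist only on this layer
      {
        (void)slot; /* ...record wiring for the given slot... */
        return std::move (*this);
      }
  };

struct PortBuilderSketch
  {
    template<typename FUN>
    DetailBuilder<FUN>
    invoke (FUN fun)                   // entry point: fixes the function type
      {
        return DetailBuilder<FUN>{std::move (fun)};
      }
  };

// usage — everything lives in the current stack frame:
// auto detail = PortBuilderSketch{}.invoke ([](int* in, int* out){ *out = *in; })
//                                  .connectInput (0);
```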
...after having determined the several levels of prototyping
currently employed, an important step ahead could be achieved
by analysing the intended and implied usage context of this
builder scheme, while still assuming the simplifications
related to prototyping.
It can be assumed that
* the Level-2 builder object is ''somehow provided''
* the invocation happens from within a media-handling lib-plugin
* alongside the desired `ProcAsset` spec, an `ExpectationContext`
  will be provided, allowing pass-through of additional semantic tags
The implementation in the lib-plugin is then able to draw from specific
knowledge related to the **Domain Ontology** ''specifically for this library''
and provide the necessary wrappers and parameter mapping information.
⟹ the **Level-2**-builder should thus expose an API to
* set up a straightforward mapping, based on a given wrapper functor
  to delegate to the actual library invocation
* optionally allow overriding some of the input connections
* alternatively allow using a complete `InvocationAdapter`,
including a `FeedManifold`, as provided directly by the library-plugin
...caused by personal circumstances
...attempt to understand the context I was working on
* Integration is driven by the `NodeLinkage_test`
* the near-term goal is to ''get any node built'' — simplified
* the outline of the `NodeBuilder` and `PortBuilder` is settled
* the task at hand is how to fill in the definition of a `Port`
* which in turn ''requires prototyping'' — to establish a kind of weaving-pattern
* the immediate next thing to do is to ''build an `InvocationAdapter` within the »test-ontology«''
...by relying on DI for some effectively global services, notably
the cache provider, the API for building and wiring render nodes
can be simplified to cover only the actual node connectivity.
Doing so directly seems to be a better solution than injecting an OutputBufferProvider;
the latter will still be needed, yet will not be part of the regular weaving pattern,
but used directly at top-level to obtain the output `BuffHandle`, which is then
passed to the `Port::weave()` call
...still not convinced that this is a good design,
since it seems to subvert the general design to treat one special case.
However, I can't see a good way to address this special case directly
There might be one specific output result buffer at top level
for each invocation, which must be delivered into a prepared
output sink. This amounts to one special case, cross-cutting
an otherwise completely generic data flow scheme.
After considering several solutions, it seems most straight-forward
to configure a specific `OutputBufferProvider` to serve as a proxy for
the `OutputSlot` / `DataSink` provided at top-level to the Render-Job.
As an aside, this analysis reveals that the result-slot number does
not belong in the `FeedManifold`, which is dynamic (on the stack);
rather, it's a fixed value configured as part of the `WeavingPattern`.
Code clean-up: mark all buffers with a dedicated tagging type
The point in question is: if we work the LocalTag into the type-hash,
could it be possible to miss an existing entry in the metadata registry?
This could cause two entries to be locked for a single buffer address,
leading to data corruption.
As far as I can see, in the current usage this would not happen,
but unfortunately this problem can not be ruled out, since the BufferProvider
API and protocol are designed to be open for various usage patterns.
However, the same potentially disastrous pattern could also materialise
when registering two different buffer types, and then locking each
for the same buffer location.
...this seems to be a tricky aspect; we use hash-chaining to create
derived entries, which may cause the identity of an entry to depend
on the order of specialisation. Looked through the possible code paths,
but these seem to be quite complicated; I see the lurking danger of
creating a second entry (with a different hash), and then in the worst case
even locking/unlocking a given buffer twice...
...this is a surprisingly tricky issue, since it undercuts the
generic and recursive implementation of buffer handling;
fortunately I had foreseen that such demands might arise down the road
and reserved a »Local Key« (now renamed to `LocalTag`),
whose meaning is implementation-defined and interpreted by
the specific `BufferProvider`
Requirement analysis shows that the ''actual buffer provider'' to use
constitutes yet another independent degree of freedom, which conceivably
must be handled by the Builder internals rather than by the Domain Ontology.
Thus the simple solution to use a `BuffDescr` to mark the type must be augmented
to also allow configuration of the underlying `BufferProvider`, which generates
the descriptor and can later be invoked with this descriptor to ''lock an actual Buffer.''
In some cases, setup of the buffer types could even be more complicated and require
access to the actual (runtime) invocation context; such extreme cases however
could be rendered as an extension of the scheme established here,
by storing the (up to now transient) constructor functors persistently.
This leads to the decision not to care for those extremely complicated
corner cases right now, and thus to construct all buffer descriptors
in the `build()` call.
...still fighting to find a suitable API to define
how inputs and outputs are connected and mapped to function parameters.
The solution drafted here uses the reshaped `DataBuilder` (≙`lib::SeveralBuilder`)
to add up connections for each »slot«, disregarding the possibility of permutations.
Similar to `NodeBuilder`, a policy template is used to pass down the setup
for an actual custom allocator.
After applying all the preceding refactorings, it turns out that
the `DataBuilder` defined here ''is essentially `lib::SeveralBuilder`'',
only with a different arrangement of the type parameters, due to the
specific usage context here.
It is thus possible to replace all the interim / helper / rebinding templates
by simple templated typedefs. The only tangible difference is that for
usage in the Builder, a ''selector policy'' is passed as a simple type argument,
which in practice wires the concrete allocator information down into each
sub-builder created during the ongoing construction of a node structure.
Redefine the policy for `lib::SeveralBuilder` to be a template-template parameter.
In fact it should have been this way from the start, yet defining this kind of
very elaborate code bottom-up lets you sometimes miss the wood for the trees.
So to restate: `lib::SeveralBuilder` takes a ''policy template,''
which then in turn will be instantiated with the same types `I` (interface)
and `E` (element type) used on `SeveralBuilder` itself. Obviously, there can be
further types involved and thus additional type parameters may be necessary,
notably the ''Allocator'' — yet these are better injected when ''defining''
the policy template itself.
The default binding for this policy template is defined as `allo::HeapOwn`,
which causes the builder to allocate the storage extents through the standard
heap allocator, and for the created `lib::Several` to take full ownership of
embedded objects, invoking their destructors when falling out of scope.
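The shape of this arrangement, as a rough sketch (declarations only, not the verbatim Lumiera definitions):

```cpp
// Rough sketch of the template-template policy binding described above
namespace allo {
  template<class I, class E>
  struct HeapOwn           // default policy: heap storage, Several owns elements
    {
      // ...allocation primitives, e.g. based on std::allocator<std::byte>...
    };
}

template<class I, class E = I,
         template<class,class> class POL = allo::HeapOwn>
class SeveralBuilder
  : protected POL<I,E>     // policy instantiated with the builder's own types I and E
  {
    // ...population functions, finally producing a lib::Several<I>...
  };
```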
As a direct consequence of the insights regarding Dependency-Injection,
a ''Builder Toolkit'' is required, which can be used to adapt various
kinds of ''Weaving Patterns'' — since obviously it is not possible to
settle on a single Pattern, and thus several ''families of builders''
will emerge, one for each ''line of construction'' for ''Weaving Patterns''.
To stress this point, what I am coding here is a prototype, aimed at
being used as part of a **Test Domain Ontology** — and other Domain Ontologies
(e.g. for FFmpeg) will certainly require other construction schemes
for their Weaving Patterns. So this is an open field, and can not be
settled once and for all.
This immediately leads to another, rather technical problem:
If we're about to work with ''delegate Builders,'' then a way
to pass down the allocator configuration is also required.
We had settled on a preliminary solution with the helper `DataBuilder`,
yet this solution looks like it defines how `lib::SeveralBuilder`
should be used in most of the cases. So there is now a conflict
between the existing definition scheme for `lib::SeveralBuilder`,
which was achieved in a bottom-up way, and a slightly different
definition scheme ''as it should be''
Starting to attack this latter detail problem, as a first step,
the definition of `DataBuilder` can be simplified by collapsing
it with the `lib::allo::SetupSeveral`
It became clear that a secondary system of connections must be added,
running top-down from a global model context, and thus contrary to the
regular orientation of the node network, which connects upwards from
predecessor to successor, in accordance with the pull principle.
If we accept this wiring as part of the primary structure, it can be
established immediately while building the nodes, thus adding a preconfigured
''pattern of Buffer Descriptors'' to each node, since there is no further
''moving part'' — beyond the wiring to the `BufferProvider`, which thus
becomes part of a global `ModelContext`
As an immediate consequence, the storage for this configuration should
also be switched to `lib::Several` and handled similarly to the primary
node wiring in the Builder...
It seems we need a `WeavingPattern`-Builder, which obviously
must be rather flexible, since those patterns are to be composed
from several layers, which should be extensible within a given ''Domain Ontology''
So this seems to lead to a builder-DSL which creates »**onion layers**«
of builders, with the ability to extend and specialise the type on each layer.
''As it will be quite challenging to get this into usable shape,
it seems best to approach this step by step through prototyping...''
Not entirely sure how to use the `emit()` call properly,
assuming that it means the data in the buffer is complete,
but can still be read after that point
* at least for a simple, prototypical setup
* and actually shifting the onus into the Level-1 builder \\
''(which is precisely the intention here)''
The deeper problem is that we must not engage in any premature decisions
regarding the structure or layout of the actual processing function invocation.
Thus attempting to create a »firewall« of sorts, by connecting
the building blocks strictly through template parameters and preferably
figuring out any detailed knowledge locally, through ''compile-time introspection...''
...even the initial effort to stub its operation turns into a
challenge, since honestly there is next to nothing we can assume safely,
without sliding into uncovered provisions regarding the ''Domain Ontology''
- it is clear that this adaptor will be a ''Concept''
- yet it must in some way access the `FeedManifold` and also control additional storage
- a rather obvious solution is to layer it ''on top'' of the manifold
...which brings about various (preliminary) decisions regarding
Metadata storage in the `Turnout`-object, which acts as guidance
and specification for the actual invocation of this specific node.
As a starting point, I chose the ''KISS'' solution of embedding some
blocks of `UninitialisedStorage` directly into the `Turnout`; obviously
these blocks must be oversized, since we cannot afford emitting a
dedicated template instance for each different count of input / output
feeds. Moreover, these data buffers are assumed to be filled with
valid objects by the builder ''(this is a lurking danger)''
...turns out that the intended structure is still too fine grained
and explicit and many operational steps can be collapsed into a single
virtual scope, wherein they can be deemed implementation detail...
...so the solution is to build up the working data as `lib::SeveralBuilder`;
however, a more concise notation can be achieved with a suitably configured
wrapping subclass; together with the cross-builder trick, this allows
writing the allocation configuration in a clearly labelled way,
while the field definition and the builder constructor hide the
complexities of picking up the extension point and passing on the
wiring to the allocator instance.
...turns out to be surprisingly tricky, since the nested
lib::SeveralBuilder instances require parametrisation by a
''policy template,'' which in turn relies on the actual allocator.
And we want to provide the allocator as a constructor parameter,
including the ability to pick up a custom specialisation for
some specific allocator (notably AllocationCluster needs to hook
into this kind of extension point, to be able to
employ its dedicated API for dynamic allocation adjustment)
* conduct analysis regarding allocator handling in the Builder
* turns out we'll have to keep around two different allocators while building
* ⟹ establish the goal to confine usage of the Node allocator to the lower Levels
* consequently must open up the `lib::SeveralBuilder` to be usable
as an intermediary data structure, while building up the target data
* in the initial design, the `SeveralBuilder` was kept opaque, since
contents can be expected to be re-located frequently and thus exposing
elements and taking references could be dangerous — yet the same is
true for `std::vector`, so people are assumed to know
when they want to shoot themselves in the foot
...especially what is necessary to represent at this level and what information
is implicit; notably there will be an implicit default wiring, but we allow
for case-by-case deviations
The Builder will have to perform several passes, gradually refining
the model into the low-level Render Node network. Right now, some
guesses regarding the last steps of this process are possible,
thus defining the lowest level of a model builder structure
* Level-3 : mapping data flow paths
* Level-2 : detailed configuration of data buffer passing
* Level-1 : build the actual parameter structures for invocation
In the current »Vertical Slice« we're able to fully define Level-1
and maybe Level-2
To escape a possible deadlock in analysis, I resort to developing
some kind of free-wheeling presupposition of how the **Builder** could
be implemented — a centrepiece of the Lumiera architecture envisioned
thus far — which ''unfortunately'' can only be planned and developed
in a more solid way ''after'' the current »Vertical Slice« is completed.
Thus I find myself in the uncomfortable situation of having to work towards
a core piece, which can not yet be built, since it relies heavily on
the very structures to be built...
...the complexity of details is a nightmare
...still fighting to grasp a generic structure that allows ''folding down''
the details into the specific ''domain ontologies'' for the media libraries
...and this line of analysis brings us deep into the ''Buffer Provider''
concept developed in 2012 — which appears to be very much to the point
and stands the test of time.
Adding some ''variadic arguments'' at the right place surprisingly leads
to an ''extension point'' — which in turn directly taps into the
still quite uncharted territory interfacing to a **Domain Ontology**;
the latter is assumed to define how to deal with entities and relationships
defined by some media handling library like e.g. FFmpeg.
So what we're set to do here is actually ''ontology mapping....''
The immediate next step is to build some render nodes directly
in a test setting, without using any kind of ''node factory.''
Getting ahead with this task requires identifying the constituents
to be represented on the first code layer for the reworked code
(here ''first layer'' means any parts that are ''not'' supplied
by generic, templated building blocks).
Notably we need to build a descriptor for the `FeedManifold` —
which in turn implies we have to decide on some fundamental aspects
of handling buffers in the render process.
To allow rework of the `ProcNode` connectivity, a lot of presumably obsoleted
draft code from 2011 has to be detached, to be able to keep it in-tree
for further reference (until the rework and refactoring is settled).
As outlined in #1367, the integration effort requires some rework
of existing code, which will be driven ahead by the `NodeLinkage_test`
* redefine Node Connectivity
* build simple `ProcNode` directly in scope
* create a `TurnoutSystem` instance
* perform a ''dummy Node-Invocation''
As a replacement for the `RefArray` a new generic container
has been implemented and tested, in interplay with `AllocationCluster`
* the front-end container `lib::Several<I>` exposes only a reference
to the ''interface type'' `I`, while hiding any storage details
* data can only be populated through the `lib::SeveralBuilder`
* a lot of flexibility is allowed for the actual element data types
* element storage is maintained in a storage extent, managed through
a custom allocator (defaulting to `std::allocator` ⟹ heap storage)
The `SeveralBuilder` employs the same tactic as `std::vector`,
over-allocating a reserve buffer, which grows in exponential
increments, to better amortise the costs of re-allocation.
This tactic does not play well with space-limited allocators
like `AllocationCluster` however; it is thus necessary to provide
an extension point where the actual allocator's limitation can be
queried, so that whatever is available can be used as reserve, but not more.
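A hedged sketch of such capped growth logic — `availableReserve()` is a hypothetical name for the extension point, which differs in the actual code:

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>

// Sketch: exponential growth as in std::vector, but capped by querying the
// allocator's remaining capacity through a (hypothetical) extension point.
template<class POLICY>
std::size_t
decideNewCapacity (POLICY& policy, std::size_t currSize, std::size_t required)
{
  std::size_t desired = std::max (2*currSize, currSize + required);  // exponential step
  std::size_t limit   = policy.availableReserve();                   // hypothetical extension point
  if (currSize + required > limit)
      throw std::length_error{"allocator can not accommodate further growth"};
  return std::min (desired, limit);   // use what is available as reserve, but not more
}
```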
With these adaptations, a full usage cycle backed by `AllocationCluster`
can be demonstrated, including variations of dynamic allocation adjustment.
...identified as part of bug investigation
* make clear that reserve() prepares for an absolute capacity
* clarify that, to the contrary, ensureStorageCapacity() means the delta
Moreover, it turns out that the assertion regarding storage limits
triggers frequently while writing the test code; so we can conclude
that the `AllocationCluster` interface lures clients into allocating
without a prior check. Consequently, this check now throws a runtime exception.
As an aside, the size limitation should be accessible on the interface,
similar to `std::vector::max_size()`
By means of the extension point, which produces a dedicated policy
for use with `AllocationCluster`, it becomes possible to use the
specialised API to adjust the latest allocation in the cluster.
When this is not actually usable, the policy will fall back
on the standard implementation (which is wasteful when
applied to `AllocationCluster`, since memory for the
obsoleted, smaller blocks is not de-allocated then)...
- decided to allow creating an empty lib::Several;
  no need to be overly rigid on this point,
  since it is move-assignable anyway...
- populate with enough elements to provoke several reallocations
with copying over the existing elements
- precisely calculate and verify the expected allocation size
- verify the use-count due to dedicated allocator instances
being embedded into both the builder and hidden in the deleter
- move-assign data
- all checksums go to zero at end
The setup for `ArrayBucket` is special, insofar as it shall de-allocate itself,
which creates the danger of re-entrant calls, or to the contrary, the danger
of invoking this clean-up function without actually invoking the destructor.
These problems become relevant once the destructor function itself is stateful,
as is the case when embedding a non-trivial, instance-bound allocator
to be used for the clean-up work. Using the new `lib::TrackingAllocator`
highlighted this potential problem, since the allocator maintains a use-count.
Thus I decided to move the »destruction mechanics« one level down into
a dedicated and well-encapsulated base class; invoking ArrayBucket's destructor
thereby becomes the only way to trigger the clean-up, and even ElementFactory::destroy()
can now safely check if the destructor was already invoked, and otherwise
re-invoke itself through this embedded destructor function. Moreover,
as an additional safety measure, the actual destructor function is now
moved into the local stack frame of the object's destructor call, removing
any possibility for the de-allocation to interfere with the destructor
invocation itself
Part of the observed deviation stems from bugs in logging and checksum calculation;
but there seems to be a real problem hidden in the allocator usage of the
new component, since the use-count of the handle does not drop to zero
While there might be the possibility to use the magic of the standard library,
it seems more prudent to handle this insidious problem explicitly,
to make clear what is going on here.
To allow for such explicit alignment handling, I have now changed the
scheme of the storage definition; the actual buffer now starts ''behind''
the `ArrayBucket<I>` object, which thereby becomes a metadata managing header.
__To summarise the problem__: since we are maintaining a dynamically sized buffer,
and since we do not want to expose the actual element type through the
front-end object, we're necessarily bound to perform a raw-memory allocation.
This is denoted in bytes, and thus the allocator can no longer manage
the proper alignment automatically. Rather, we get a storage buffer with
just ''some accidental'' alignment, and we must take care to request sufficient
overhead to be able to shift the actual storage area forward to the next
proper alignment boundary. Obviously this also implies that we must
store this individual padding adjustment somewhere in the metadata,
in order to be able to report the correct size of the block later
on de-allocation.
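A minimal sketch of this padding computation (illustrative, not the verbatim implementation): the usable storage starts behind the metadata header, shifted to the next alignment boundary, and the padding is recorded so the full block size can be reported on de-allocation.

```cpp
#include <cstddef>
#include <cstdint>

// Sketch: locate the first properly aligned element slot behind the header
// and record the individual padding in the metadata (paddingOut).
inline std::byte*
alignedStorageStart (std::byte* rawBuffer, std::size_t headerSize,
                     std::size_t alignment, std::size_t& paddingOut)
{
  auto addr  = reinterpret_cast<std::uintptr_t> (rawBuffer) + headerSize;
  paddingOut = (alignment - addr % alignment) % alignment;   // 0 if already aligned
  return rawBuffer + headerSize + paddingOut;
}
```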
The solution implemented thus far turns out not to be sufficient
for ''over-aligned data'', as the raw-allocator can not perform the
''magic work'' because we're exposing only `std::byte` data.
This adaptor works in concert with the generic allocator
building blocks (prospective ''Concepts'') and automatically
registers either a static or a dynamic back-link to the factory
for clean-up.
Use this wrapper for a more in-depth test of the new `TrackingAllocator`
and verify proper behaviour through the `EventLog` (test sequence sketched below)
- create two vectors, attached to the `TrackingAllocator`
- emplace Tracker-Objects
- move an object to the other vector
- destroy the containers
🠲 Event-Log looks plausible!
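A hypothetical sketch of that sequence; `TrackingAlloc` and `Tracker` stand in for the actual test-helper types, whose real interfaces (EventLog wiring, use-count) are more elaborate.

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Stand-ins for the actual test helpers (hypothetical, simplified)
struct Tracker { std::string id; };

template<class T>
struct TrackingAlloc
  {
    using value_type = T;
    TrackingAlloc() = default;
    template<class U>
    TrackingAlloc (TrackingAlloc<U> const&) { }

    T*   allocate (std::size_t n)         { /* log allocation */   return std::allocator<T>{}.allocate (n); }
    void deallocate (T* p, std::size_t n) { /* log deallocation */ std::allocator<T>{}.deallocate (p, n); }
  };
template<class T, class U>
bool operator== (TrackingAlloc<T> const&, TrackingAlloc<U> const&) { return true; }
template<class T, class U>
bool operator!= (TrackingAlloc<T> const&, TrackingAlloc<U> const&) { return false; }

void
demo()
{
  std::vector<Tracker, TrackingAlloc<Tracker>> v1, v2;  // two vectors, same allocator
  v1.push_back (Tracker{"a"});                          // emplace Tracker objects
  v1.push_back (Tracker{"b"});
  v2.push_back (std::move (v1.back()));                 // move an object to the other vector
  v1.pop_back();
}                                                       // destroy containers: log should balance
```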
- use a meta-registry of pools
- retrieve and manage the `MemoryPool` instances by shared_ptr, with a weak registry entry
- use a hashtable for the allocations, keyed by the allocated memory address
- ability to verify a hash-checksum
- ability to watch number of allocations and allotted bytes
- using either a common global pool or a separate dedicated pool
- log all operations into a common `EventLog` instance
- front-end adaptors for use as C++ custom allocator
...these features are now used quite regularly,
and so a dedicated documentation test seems indicated.
Actually my intention is to add a tracking allocator to these test helpers
(and then to use that to verify the custom allocator usage of `lib::Several`)
Phew... this was a tough one — and not sure yet if this even remotely works...
Anyway, the `lib::SeveralBuilder` is already prepared for collaboration with a
custom allocator, since it delegates all memory handling through a base policy,
which in turn relies on std::allocator_traits.
The challenge however is to find a way...
* to make this clear and easy to use
* to expose an extension point for specific tweaks
* and to make all this work without excessive header cross dependencies
This is a low-level interface to allow changing the size of
the currently latest allocation in `AllocationCluster`; a client
aware of this capability can perform a real »in-place re-alloc«,
assuming the very specific usage constraints can be met.
`lib::Several<X>` will use this feature when attached to an
`AllocationCluster`; with this special setup, a previously
unknown number of non-copyable objects can be built without
wasting any storage, as long as the storage reserve in the
current extent of the `AllocationCluster` is sufficient.
...use some pointer arithmetic for this test to verify
some important cases of object placement empirically.
Note: there is possibly a very special problematic case
when ''over-aligned objects'' are not placed in accordance
with their alignment requirements. Fixing this problem would
be non-trivial, and thus I have only left a note in #1204
...including the interesting cases where objects are relocated
and the element spread is changed. With the help of the checksum
feature built into the test-dummy objects, the properly balanced
invocation of constructors can be demonstrated
PS: for historical context...
Last week the "Big F**cking Rocket" successfully performed the
test flight 4; both booster and Starship made it back to the
water surface and performed a soft splash-down after decelerating
to speed zero. The Starship was even able to maintain control
in spite of quite some heat damage on the steering flaps.
Yes ... all techies around the world are thrilled...
- spread change now retains the nominal element reserve
- `capacity()` and `capReserve()` now exposed on the builder API
- factor out the safety-check functions for element handling
- rewrite the `resize()` builder function to be more generic
__Test now covers__ an example with a trivial data type, which can
indeed be resized and allows growing the buffer on-the-fly without
requiring any knowledge of the actual type (due to using `memmove`)
Building on the preceding analysis, we can now demonstrate that
the container is initially able to grow, but loses this capability
after accepting one element of unknown subclass type...
`lib::Several` is designed to be highly adaptable, allowing for
several quite distinct usage styles. On the downside, this requires
performing some checks only at runtime, since the ability to handle
some element depends on specific circumstances.
This is a notable difference to `std::vector`, which is simply not capable
of handling ''non-copyable'' types, even if given an up-front memory reservation.
The last test case provided with the previous changeset did not trigger
an exception, but closer investigation revealed that this is correct,
since in this specific situation the container can accept this object type,
thereby just losing the ability to move-relocate further objects.
A slightly re-arranged test scenario can be used to demonstrate this fine point.
- the test-dummy objects need a `noexcept` move ctor
- **bug** here: need an explicit check to prevent other types
than the known element type from ''sneaking in''
The `SeveralBuilder` is very flexible with respect to added elements,
but it will investigate the provided type information and reject any
further build operation that can not be carried out safely.
...turns out that we must be sure to pass a plain "object" type
to the standard allocator framework (no const, no references).
Here, ''object type in C++ terminology'' means a scalar or record type,
but not a function type, not a reference, and not void.
Consider what (not) to support.
Notably I decided ''not to support'' moving out of an iterator,
since doing so would contradict the fundamental assumptions of
the »Lumiera Forward Iterator« Concept.
Start verifying some variations of element placement,
still focussing on the simple cases
Parts of the decision logic for element handling were packaged
as a separate »strategy« class — but this turned out to be neither
a real abstraction, nor configurable in any way. Thus it is better
to simplify the structure and turn these type predicates into simple
private member functions of the SeveralBuilder itself
Elements maintained within the storage should be placed such
as to comply with their alignment requirements; the element spacing
thus must be increased to be a multiple of the given type's alignment.
This solution works in most common cases, where the alignment is
not larger than the platform's bus width (typically 64 bit); but for
''over-aligned types'' this scheme may still generate wrong object
start positions (a completely correct solution would require adding
a fixed offset to the beginning of the storage array, and also
capturing the alignment requirements during population and
re-checking for each new type).
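The spacing rule itself, as a small sketch: the per-element spread is rounded up to the next multiple of the type's alignment, so each slot starts on a valid boundary (assuming the storage base itself is suitably aligned).

```cpp
#include <cstddef>

// Round the element size up to the next multiple of the alignment
constexpr std::size_t
alignedSpread (std::size_t elementSize, std::size_t alignment)
{
  return (elementSize + alignment - 1) / alignment * alignment;
}

static_assert (alignedSpread (10, 8) == 16);   // padded up to the next boundary
static_assert (alignedSpread (16, 8) == 16);   // already a multiple: unchanged
```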
...and the nice thing is, the recently built `IterIndex` iteration wrapper
covers this functionality right away, simply because `lib::Several`
is a generic container with subscript operator.
...passes the simplest unit test (sketched below)
* create a Several<int>
* populate from `std::initializer_list`
* random-access to elements
''next step would be to implement iteration''
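Roughly the following — the builder calls here are written as illustration and may not match the exact `lib::SeveralBuilder` API:

```cpp
#include <cassert>

// Illustrative only: builder call names are assumptions, not the verified API
void
verifySimpleUsage()
{
  lib::Several<int> elms = lib::SeveralBuilder<int>{}
                             .append ({1, 2, 3, 5, 8})   // populate from initializer list
                             .build();
  assert (elms[3] == 5);                                 // random access to elements
}
```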
After some fruitless attempts, I settled on using std::function directly,
in order to establish a working baseline of this (tremendously complicated)
allocation logic. Storing a std::function in the ArrayBucket is certainly
wasteful (it costs 4 »slots« of memory), but has the upside that
it handles all those tricky corner cases magically; notably
the functor can be stored completely inline in the most relevant
case where the allocator is a monostate; moreover we bind a lambda,
which can be optimised very effectively, so that in the simplest case
there will be only the single indirection through the ''invoker''.
This **completes the code path for a simple usage cycle**
🠲 ''and hooray ... the test crashes with a double-free''
- ensure the ''deleter function'' is invoked
- care for proper ''deleter'' setup in case of exception while copying
- need to »lock-in« on one specific kind of ''destructor invocation scheme,''
since we do not keep track of individual concrete element types
Parts of this logic were first coded in the `realloc` template method,
where they did not really belong; thus similar logic is reintegrated one level
above, in `SeveralBuilder::adjustStorage()`. Moreover, for performance reasons,
always start with an initial chunk, similar to what `std::vector` does...
Since this is meant as a policy implementation, reduce it to the bare operation;
the actual container storage handling logic shall be implemented in the container
and based on those primitive and configurable base operations
...still fighting to get the design of the `AllocationPolicy`
settled to work well with `AllocationCluster`, while also allowing
it to handle data types which are (not) trivially copyable.
This changeset attempts to turn the logic around: now we capture
a ''move exclusion flag'' and otherwise allow the Policy to
decide on its own, based on the ''element type''
- verify if the new element can just fit in
- otherwise ensure the storage adjustments are basically possible
- throw exception in case the new element can not be accommodated
- else request possible storage adjustments
- and finally let the allocator place the new element
Draft skeleton of the logic for element creation.
This turns out to be a rather challenging piece of code,
since we have to rely on logical reasoning about properties
of the element types in order to decide if and how these
elements can be emplaced, including the possibility to
re-allocate and move existing data to a new location
(see the sketch after the following list).
- if we know the exact element type, we can handle any
copyable or movable object
- however, if the container is filled with a mixture of types,
we can not re-allocate or grow dynamically, unless all data
is trivially copyable (and can thus be handled through memmove)
- moreover we must ensure the ability to invoke the proper destructor
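A hedged sketch of this type-based reasoning — the flags and names are illustrative, while the actual decision logic lives inside `SeveralBuilder`:

```cpp
#include <type_traits>

// Illustrative: may the container relocate its contents to a new allocation?
template<typename E>
constexpr bool
canRelocate (bool exactTypeKnown, bool mixedTypes)
{
  if (mixedTypes)
      // mixture of types: only a raw memmove is safe, and only for trivial data
      return std::is_trivially_copyable_v<E>;
  if (exactTypeKnown)
      // homogeneous, exactly known type: copy- or move-construction can be used
      return std::is_copy_constructible_v<E> or std::is_move_constructible_v<E>;
  return false;    // unknown subclass type: must not relocate existing elements
}
```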
In-depth analysis of storage management revealed a misconception
with respect to possible storage optimisations, requiring more
metadata fields to handle all corner cases correctly.
It seems prudent to avoid any but the most obvious optimisations
and wait for real-world usage for a better understanding of the
prevalent access patterns. However, in preparation for any future
optimisations, all access coordination and storage metadata is
now relocated into the `ArrayBucket`, and thus resides within the
managed allocation, allowing for localised layout optimisations.
To place this into context: the expected prevalent use case is
for the »Render Nodes Network«, which relies on `AllocationCluster`
for storage management; most nodes will have only a single predecessor
or successor, leading to a large number of lib::Several instances
populated with a single data element. In such a scenario, it is
indeed rather wasteful to allocate four »slots« of metadata for
each container instance; even more so since most of this
metadata is not even required in such a scenario.
...which basically ''seems doable'' now, yet turns up several unsolved problems
- need a way to handle excess storage for the raw allocation
- generally should relocate all metadata into the ArrayBucket
- mismatch at various APIs; must re-think where to pass size explicitly
- unclear yet how and where to pass the actual element type to create
...turns out to be rather challenging, due to the far-reaching requirements
* the default case (heap allocation) ''must work out-of-the-box''
* optionally a C++ standard-conformant `Allocator` can be adapted
* which works correctly even in case this allocator is ''not a monostate''
* **essential requirement** is to pass an `AllocationCluster` reference directly
* need a ''generic extension point'' to adapt to similarly elaborate custom schemes
__Note__: especially we want to create a direct collaboration between the allocation policy and the underlying allocator, to allow support for a dedicated ''realloc operation''
- code spelled out as intended, according to generic scheme
- can now encode the »unmanaged« case directly as `null`-deleter,
because in all other cases a deleter function is mandatory now
- add default constructor to `ArrayBucket`, detailing the default spread
Even while at first sight only a ''deleter instance'' is required,
it seems prudent to rearrange the code in accordance with the prospective
Allocator / Object Factory concept, and rather try to incorporate
the specifics of the memory layout into this generic view, thereby
abstracting the actual allocator away.
This can be achieved by using a standard allocator for `std::byte`
as the base allocator and treating each individual element allocator
as a specialised cross-allocator (assuming that this cross-adaptation
is actually trivial in almost all cases)
The fundamental decision is that we want to have a single generic front-end,
meaning that we must jump dynamically into a configured deleter function.
And on top of that comes the additional requirement that ''some allocators''
are in fact tied to a specific instance, while other allocators are monostate.
However, we can distinguish both cases by probing whether the allocator can be
default-constructed, and whether a default-constructed allocator is equivalent
to the currently used allocator instance.
If this test fails, we must indeed maintain a single allocator instance,
and (to avoid overengineering for this rather special use case) we will then
place this allocator instance into heap memory, with a self-cleanup mechanism.
On the other hand, all monostate allocators can be handled through static trampolines.
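The probing described above, as a minimal sketch (assuming the allocator defines equality comparison):

```cpp
#include <type_traits>

// Is this allocator equivalent to a fresh default-constructed instance?
template<class ALO>
bool
isMonostate (ALO const& allocator)
{
  if constexpr (std::is_default_constructible_v<ALO>)
      return ALO{} == allocator;    // monostate: static trampolines suffice
  else
      return false;                 // instance-bound: the instance must be retained
}
```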
- the basic decision is to implement ''realloc'' similar to `std::vector`
- however the situation is complicated by the desire to allow arbitrary element types
- ⟹ must build a strategy based on the properties of the target type
- completely dynamic growth is only possible for trivially-movable types
- can introduce a dedicated ''element type'' though, and store a trampoline handler
- create by forwarding allocator arguments to policy
- builder-Op to append from iterator
- decide to collapse the ArrayBucket class, since
access is going through unsafe pointer arithmetic anyway
- favour dynamic polymorphism
- use additional memory for management data alongside the element allocation
- encode a flag and a deleter pointer to enable ownership of the allocation
- inherit base container privately into builder, so the build ends with a slice
Some decisions
- use a single template with policy base
- population via separate builder class
- implemented similar to vector (start/end)
- but able to hold larger (subclass) objects
- basically works out-of-the-box now
- the hard wired fixed Extent size is a serious limitation
- however, this is not the intended primary use, rather complementary
...this is an important detail: quite commonly, a custom allocator
is actually implemented as monostate, to avoid bloating every client container
with a backlink pointer; by inheriting the `StdFactory` adapter from the
allocator, the empty-base optimisation can be exploited.
Thus in the standard case, LinkedElements is the same size as a single
pointer, which is already exploited at several places in the code base.
Notably `AllocationCluster` uses a »virtual overlay« to dress up the
position pointer as `LinkedElements`, allowing most of the
administration and memory management to be delegated to existing and
verified code. With these adjustments, `LinkedElements` passes the tests
again and the rework of `AllocationCluster` is considered complete.
This is the first validation of the new design:
the policy to take ownership can be reimplemented simply
by delegating to the adaptor for a C++ standard allocator
The following structure can be expected, after __switching to C++20__
* Concept **Allocator** deals with the bare memory allocation
* Concept **Factory** handles object creation and disposal by delegation
* Concept **Handle** is a ready-made functor for dependency-injection
Right now, an implementation of the ''prospective Factory Concept''
can be provided, by delegating through `std::allocator_traits` to a given
`std::allocator` or compatible object
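A speculative sketch of how such C++20 concepts might look — these are not settled definitions from the code base:

```cpp
#include <concepts>
#include <cstddef>

// Speculative: bare memory allocation in terms of raw bytes
template<class A>
concept Allocator = requires (A a, std::byte* p, std::size_t n)
  {
    { a.allocate (n) } -> std::convertible_to<std::byte*>;
    a.deallocate (p, n);
  };

// Speculative: object creation and disposal by delegation
template<class F, class T>
concept Factory = requires (F f, T* elm)
  {
    { f.template create<T>() } -> std::convertible_to<T*>;
    f.destroy (elm);
  };
```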
By default, LinkedElements uses a policy OwningHeapAllocated;
while retaining this interface, this policy should be recast
to rely on a standard-compliant allocator, with a default
fallback to `std::allocator<T>`
This way, a single policy would serve all the cases where
objects are actually owned and managed by `LinkedElements`,
and most special policies would be redundant.
This turns out to be quite tedious and technical however,
since the newer standard mandates using std::allocator_traits
as the front-end, and moreover the standard allocators are always
tied to one specific target type, while `LinkedElements` is
deliberately used to maintain a polymorphic sequence.
...what I've implemented yesterday is effectively the same functionality
as provided automatically by the C++ object system when using a virtual destructor.
Thus a much cleaner solution is to turn `Destructor` into an interface
and let C++ do all the hard work.
Verified in test: works as intended
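In essence (a minimal sketch; the actual interface carries more than the destructor):

```cpp
// The virtual destructor replaces the explicit destructor-function mechanism
struct Destructor
  {
    virtual ~Destructor() = default;
  };

struct Element
  : Destructor
  {
    // ...payload; the concrete ~Element() is reached through the base pointer
  };

void
destroy (Destructor* elm)
{
  elm->~Destructor();   // virtual dispatch invokes the proper concrete destructor
}
```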
This is the first draft, implementing the invocation explicitly
through a trampoline function. While it seems to work,
the formulation can probably be simplified....
These diagnostic helpers must rely on low-level trickery,
since the implementation strives to avoid unnecessary storage overhead.
Since `AllocationCluster` is move-only (for good reasons) and `StorageManager`
can not be constructed independently, a »backdoor« is created by
a forced cast, relying on the known memory layout
- rather accept hard-wired limits than make the implementation excessively generic
- by exploiting the layout, the administrative overhead can be reduced significantly
- the trick with the "virtual management overlay" allows handing off most of the
  clean-up work to C++ destructor invocation
- it is important to verify these low-level arrangements explicitly by unit-test
* this is pure old-style low-level trickery
* using a layout trick, the `AllocationCluster`
can be operated with the bare minimum of overhead
* this trick relies on the memory layout of `lib::LinkedElements`
...due to the decision to use a much simpler allocation scheme
to increase the probability of actual savings; after switching the API
and removing all trading-related aspects, a lot of further code is obsoleted
Notably this raises the difficult question whether to ensure
**invocation of destructors**.
Not invoking dtors ''breaks one of the most fundamental contracts''
of the C++ language — yet the infrastructure to invoke dtors in such
a heterogeneous cluster of allocations creates significant
overhead and is bound to poison the caches (objects to be deallocated
typically sit in cold memory pages).
What makes this decision especially daunting is the fact that the
low-level Model can be expected to be one of the largest systemic
data structures (leaving aside the media buffers).
I am leaning towards a compromise: hand this decision down
to the user of the `AllocationCluster`
After some analysis, it became clear that the existing code for
`AllocationCluster` (while in itself valid) will likely miss the point
for the expected usage in the low-level Model: most segments of the
model will be rather small, and thus there is not enough potential for
amortisation when using such a per-type and per-segment scheme;
a rather simplistic linear allocator will be sufficient.
On the other hand, with the current C++ standard it is easy to provide
a compliant allocator implementation for STL containers, and thus the
interface should be retro-fitted accordingly.
At the time of the initial design attempts, I naively created a
classic interface to describe a fixed container allocated ''elsewhere.''
Meanwhile the C++ language has evolved and this whole idea looks
much more as if it could be a ''Concept'' (C++20). Moreover, having
several implementations of such a container interface is deemed inadequate,
since it would necessitate ''at least two indirections'' — while
going the Concept + Template route would allow working without any
indirection, given our current understanding that the `ProcNode` itself
is ''not an interface'' — rather a building block.
- the starting point is the idea to build a dedicated ''turnout system''
- `StateAdapter`, `BuffTable` ⟶ `FeedManifold` and _Invocation_ will be fused
- actually, the `TurnoutSystem` will be ''pulled'' and orchestrate the invocation
- the structure is assumed to be recursive
The essence of the Node-Invocation, as developed 2009 / 2011, remains intact,
yet it will be organised along a clearer structure
Within the existing body of code, there are two unfinished attempts
towards building a node invocation and management of data buffers.
The first attempt was entirely driven from the angle of invoking a
processing function, while the second one draws from a wider scope
and can be considered the solution to build upon regarding data buffers
in general. However, the results of the first approach are well suited
for their specific purpose, so both solutions will be combined.
Thus the arrangement of data feeds going in and out of the render node
shall be renamed from `BuffTable` to `FeedManifold`
...which seems to be basically fine thus far
...beyond some renaming and rearranging
''it turns out that the final, crucial links,
necessary to tie all together, are yet to be developed''
Facing quite some difficulties here, since there are (at least)
two abandoned past efforts towards building a render node network
in the code base; the structure and architecture decisions from these
previous attempts seem largely valid still, yet on a technical level,
the style of construction evolved considerably in the meantime. Moreover,
these old fragments of code, written during the early stages of the
project, were lacking clear goals and anchor points in places;
the situation is quite different now in this respect.
Sticking to well proven practice, the rework will be driven by a test setup,
and will progress over three steps with increasing levels of integration.
The initial effort of building a Scheduler can now be **considered complete**
Reaching this milestone required considerable time and effort, including
an extended series of tests to weed out obvious design and implementation flaws.
While the assessment of the new Scheduler's limitations and traits is ''far from complete,''
some basic achievements could be confirmed through this extended testing effort:
* the Scheduler is able to follow a given schedule effectively,
until close up to the load limit
* the ''stochastic load management'' causes some latency on isolated events,
in the order of magnitude < 5ms
* the Scheduler is susceptible to degradation through Contention
* as mitigation, the Scheduler prefers to reduce capacity in such a situation
* operating the Scheduler effectively thus requires a minimum job size of 2ms
* the ability for sustained operation under full nominal load has been confirmed
  by performing **test sequences of over 80 seconds**
* beyond the mentioned latency (<5ms) and a typical turnaround of 100µs per job
(for debug builds), **no further significant overhead** was found.
Design, Implementation and Testing were documented extensively in the [https://lumiera.org/wiki/renderengine.html#Scheduler%20SchedulerProcessing%20SchedulerTest%20SchedulerWorker%20SchedulerMemory%20RenderActivity%20JobPlanningPipeline%20PlayProcess%20Rendering »TiddlyWiki« #Scheduler]
This test completes the stress-testing effort
and summarises the findings
* Scheduler performs within relevant parameter range without significant overhead
* Scheduler can operate with full load in stable state, with 100% correct result
The behaviour seems consistent and the schedule breaks at the expected point.
At first sight, concurrency seems slightly too low; detailed investigation
however shows that this is due to the structure of the load graph,
and in fact the run time comes close to optimal values.
The `BreakingPoint` tool conducts a binary search to find the ''stress factor''
where a given schedule breaks. There are some known deviations related to the
measurement setup, which unfortunately impact the interpretation of the
''stress factor'' scale. Earlier, an attempt was made to watch those factors
empirically and work a ''form factor'' into the ''effective stress factor''
used to guide this measurement method.
Closer investigation with extended and elastic load patterns now revealed
a strong tendency of the Scheduler to scale down the work resources when not
fully loaded. This may be mistaken by the above-mentioned adjustments as a sign
of a structural limitation of the possible concurrency.
Thus, as a mitigation, those adjustments are now only performed at the
beginning of the measurement series, and also only when the stress factor
is high (implying that the scheduler is actually overloaded and thus has
no incentive for scaling down).
These observations indicate that the »Breaking Point« search must be taken
with a grain of salt: especially when the test load does ''not'' contain
a high degree of interdependencies, it will be ''stretched elastically''
rather than outright broken. And under such circumstances, this measurement
actually gauges the Scheduler's ability to comply with an established
load and computation goal.
...well — more of a logical contradiction, not so much a bug.
The underlying problematic situation arises when the Extent storage
has meanwhile been expanded, and especially when the active slots
are in »wrapped state«. In this case, the newly allocated extents
must be rotated in, which invalidates existing index numbers.
This problem was amended by exploiting a caching mechanism, which allows
re-attaching and validating an index position still stored in an old
iterator; especially this can happen when attempting to attach a
follow-up dependency onto a job planned earlier, but not yet scheduled.
The problem here was an assertion failure, which was triggered with a
high probability; the fix for the problem detailed above used the yield()
function, while it was actually only interested in retrieving the
Extent's address to probe if the extent matches a known storage location.
The solution is to provide a dedicated function for this check, which
can then skip the sanity check (because in this case we do not want
to use the Extent, and thus can touch obsoleted/inactive Extents
without problem)
...this seems to be the last topic for this investigation of Scheduler behaviour;
the goal is to demonstrate readiness for stable-state operation over an extended period of time
- use parameters known to produce a clean linear model
- assert on properties of this linear model
Add extended documentation into the !TiddlyWiki,
with a textual account of the various findings,
also including some of the images and diagrams,
rendered as SVG
In the end, I decided that it ''is too early to decide anything'' in this respect...
The actual situation encountered is a **Catch-22**:
* in its current form, the »Tick« handler detects compulsory jobs beyond deadline
* since such a Job ''must not be touched anymore,'' there is no way scheduling can proceed
* so this would constitute a ''Scheduler Emergency''
All fine — just the »Tick« handler ''itself is a compulsory job'' — and being a job, it can well be driven beyond its deadline. In fact this situation was encountered as part of stress testing.
Several mitigations or real solutions are conceivable, but in the end,
too little is known yet regarding the integration of the scheduler within the Engine
Thus I marked the problematic location and opened #1362
Investigate the behaviour over a wider range of job loads,
job count and worker pool sizes. Seemingly the processing
can not fully utilise the available worker pool capacity.
By inspection of trace-dumps, one impeding mechanism could
be identified: the »stickiness« of the contention mitigation.
Whenever a worker encounters repeated contention, it steps up
and adds more and more wait cycles to remove pressure from the
schedule coordination. As such this is fine and prevents further
degradation of performance by repeated atomic synchronisation.
However, this throttling was kept up needlessly after further
successful work-pulls. Since job times of several milliseconds
can be expected on average in media processing, such a long
retention would spread a performance degradation over a duration
of several frames. Thus, the scheme for step-down was changed
to decrease the throttling by a power series, rather than just
decrementing the level.
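A condensed sketch of this changed step-down (names and constants are illustrative, not the actual worker implementation):

```cpp
#include <algorithm>

class ContentionThrottle
  {
    int level_{0};                        // each level imposes further wait cycles
    enum { MAX_LEVEL = 10 };

  public:
    void
    onContention()                        // step up on repeated contention
      {
        level_ = std::min (level_+1, int(MAX_LEVEL));
      }

    void
    onSuccessfulPull()                    // step down after a successful work-pull:
      {                                   // halving the level makes the throttling
        level_ /= 2;                      // decay as a power series (previously:
      }                                   // --level_, retaining throttling too long)

    int waitCycles()  const { return (1 << level_) - 1; }
  };
```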
Use the statistic functions imported recently from Yoshimi-test
to compute a linear regression model as immediate test result.
Combining several measurement series, this allows drawing conclusions
about some generic traits and limitations of the scheduler.
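For reference, the gist of such a least-squares fit, reduced to a self-contained function (the field names are assumptions; the actual code imported from Yoshimi-test offers more, e.g. correlation and error terms):

```cpp
#include <vector>
#include <cstddef>

struct LinearModel { double socket, gradient; };     // y ≈ socket + gradient·x

LinearModel
fitLinearRegression (std::vector<double> const& x, std::vector<double> const& y)
{
    double n = x.size();
    double sx=0, sy=0, sxx=0, sxy=0;
    for (size_t i=0; i < x.size(); ++i)
      {
        sx  += x[i];        sy  += y[i];
        sxx += x[i]*x[i];   sxy += x[i]*y[i];
      }
    double gradient = (n*sxy - sx*sy) / (n*sxx - sx*sx);
    return LinearModel{(sy - gradient*sx) / n, gradient};
}
```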
Visual tweaks specific to this measurement setup
* include a numeric representation of the regression line
* include descriptive axis labels
* improve the key names to clarify their meaning
* heuristic code for the x-ticks
Package these customisations as a helper function into the measurement tool
After a lot of further tinkering, seemingly arriving at a
somewhat satisfactory solution for the layout and arrangement of
test definitions and especially the table for measurement series.
While the complete setup remains fragile indeed, and complexity is more
hidden than reduced — the pragmatic compromise established yesterday
at least allows reducing the amount of boilerplate in the test or
measurement setup, making the actual specifics stand out clearly.
----
As an aside, the usage of the `DataFile` type imported from Yoshimi-test
recently was re-shaped more towards a generic handling of tabular data with
CSV storage option; thus renaming the type now into `DataTable`.
Persistent storage is now just one option, while another usage pattern
compounds observation data into table rows, which are then directly
rendered into a CSV string, e.g. for visualisation as Gnuplot graph.
Encountering ''just some design problems related to the test setup,''
which however turn out hard to overcome. Seems that, in my eagerness
to create a succinct and clear presentation of the test, I went into
dangerous territory, overstretching the abilities of the C++ language.
After working with a set of tools created step by step over an extended span of time,
''for me'' the machinations of this setup seem to be reduced to flipping a toggle
here and there, and I want to focus on these active parts while laying out this test.
''This would require'' creating a system of nested scopes, becoming gradually more
and more specific, and moving towards the individual case in question; notably any
clarification and definition within those inner focused contexts would have to be
picked up and linked in dynamically.
Yet the C++ language only allows to be ''either'' open and flexible towards
the actual types, or ''alternatively'' to select dynamically within a fixed
set of (virtual) methods, which then must be determined from the beginning.
It is not possible to tweak and adjust base definitions after the fact,
and it is not possible to fill in constant definitions dynamically
with late binding to some specific implementation type provided only
at current scope.
Seems that I am running against that brick wall over and over again,
piling up complexities driven by a desire for succinctness and clarity.
Now attempting to resolve this quite frustrating situation...
- fix the actual type of the TestChainLoad by a typedef in test context
- avoid the definitions (and thus the danger of shadowing)
and use one `testSetup()` method to place all local adjustments.
With the addition of a second tool `bench::ParameterRange`,
the setup of the test-context for measurement became confusing,
since the original scheme was mostly oriented towards the
''breaking point search.''
On close investigation, I discovered several redundancies, and
moreover, it seems questionable to generate an ''adapted-schedule''
for the Parameter-Range measurement method, which aims at overloading
the scheduler and watching the time to resolve such a load peak.
The solution entertained here is to move most of the schedule-ctx setup
into the base implementation, which is typically just inherited by the
actual testcase setup. This allows to leave the decision whether to build
an adapted schedule to the actual tool. So `bench::BreakingPoint` can
always set up the adapted schedule with a specific stress-factor,
while `bench::ParameterRange` by default does nothing in this
respect, and thus the `ScheduleCtx` will provide a default schedule
with the configured level-duration (and the default for this is
lowered to 200µs here).
In a similar vein, calculation of result data points from the raw measurement
is moved over into the actual test setup, thereby gaining flexibility.
Rework the existing tool to capture the measurement series
into the newly integrated CSV-based data storage, allowing
to turn the results into a Gnuplot-visualisation.
...which is added automatically whenever additional data columns are present
Result can only be verified visually
* the upper diagram should show the first Fibonacci points
* a (correct) linear regression line should be overlaid in red
* below, a secondary diagram should appear, with aligned axis
* the row "one" in this diagram should be shown as impulses
* the further rows "two" and "three" should be drawn as
green points, using the secondary Y-axis (values 100-250)
* Gnuplot can handle missing data points
The idea is to build the Layout-branching into the generated Gnuplot script,
based on the number of data columns detected. If there is at least one further
data column, then the "multiplot" layout will be used to feature this
additional data in a secondary diagram below with aligned axis;
if more than one additional data column is present, all further
visualisation will draw points, using the secondary Y-axis
Moreover, Gnuplot can calculate the linear regression line itself,
and the drawing will then be done using an `arrow` command,
defining a function regLine(x) based on the linear model.
- `forElse` belongs to the metaprogramming utils
- have a CSVLine, which is a string with custom appending mechanism
- this in turn allows CSVData to accept arbitrarily sized tuples,
by rendering them into CSVLine
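The mechanism in a nutshell — a simplified sketch (the real CSVLine presumably also deals with quoting and number formatting):

```cpp
#include <string>
#include <tuple>
#include <sstream>

// a string with custom appending: inserts the field separator automatically
struct CSVLine : std::string
  {
    template<typename VAL>
    CSVLine&
    operator+= (VAL const& val)
      {
        std::ostringstream buf;
        if (not empty()) buf << ',';
        buf << val;
        append (buf.str());
        return *this;
      }
  };

// render an arbitrarily sized tuple into one CSV line (C++17 fold expression)
template<typename...VALS>
CSVLine
renderCSV (std::tuple<VALS...> const& row)
{
    CSVLine line;
    std::apply ([&](auto const&... val){ (line += val, ...); }, row);
    return line;
}
// e.g. renderCSV (std::tuple{42, 1.5, "ok"})  ⟹  "42,1.5,ok"
```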
The intention is to create a library of convenient building blocks;
providing a visualisation should be as simple as invoking a free function
with CSV data, yet with the ability to tweak some labels or display
variations if desired.
This can be achieved by...
* having a series of ready-made standard visualisations
* expose a function call for each, accepting a data-context builder
* provide secondary convenience shortcuts, which add some of the expected bindings
* notably a shortcut is provided to take the data as CSV-string
* augmented by a wrapper/builder to allow defining data points inline
Deliberately keep it unstructured and add dedicated functions
for each new emerging use case; hopefully some common usage scheme
will emerge over time.
* Data is to be handed in as an iterator over CSV-strings.
* will have to find out about additional parametrisation on a case-by-case basis
The default visuals of gnuplot are simple,
yet tend to look cluttered and are not well suited for our purpose
We need the following presentation
* a scatter diagram with a regression line
* additionally a secondary diagram stacked below, with aligned axis
Thus 🠲 R-T-F-M
* The [http://gnuplot.info/ Gnuplot docu] is exhaustive, yet hard to get into
* Helpful was this collection of [http://gnuplotting.org/ example solutions for scientific plots]
* and — Stackoverflow...
A minimalist `TextTemplate` engine is available for in-project use.
* supports only the bare minimum of features (no programming language)
* substitution of `${placeholder}` by key-name data access
* conditional section `${if key}...${end if}`
* iteration over a data sequence
* other than most solutions available as library,
this implementation does **not require** a specific data type,
nor does it invent a dynamic object system or JSON backend;
rather, a generic ''Data Source Adapter'' is used, which can
be specialised to access any kind of ''structured data''
* the following `DataSource` specialisations are provided
* `std::map<string,string>`
* Lumiera »External Tree Description« (based on `GenNode`)
* a string-based spec for testing
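To convey the flavour, a toy version of just the `${placeholder}` substitution against a `std::map` binding — this is ''not'' the actual TextTemplate code, which additionally parses conditionals and iteration and works through the generic adapter:

```cpp
#include <map>
#include <regex>
#include <string>

using MapS = std::map<std::string, std::string>;

std::string
substitutePlaceholders (std::string const& tpl, MapS const& data)
{
    static std::regex const PLACEHOLDER{R"(\$\{([^}]+)\})"};
    std::string res;
    auto tail = tpl.cbegin();                              // start of remaining text
    auto end  = std::sregex_iterator{};
    for (auto it = std::sregex_iterator{tpl.begin(), tpl.end(), PLACEHOLDER}; it != end; ++it)
      {
        res.append (tail, tpl.cbegin() + it->position());  // constant text before tag
        auto entry = data.find (it->str(1));               // key-name data access
        if (entry != data.end())
            res += entry->second;                          // unknown keys expand empty
        tail = tpl.cbegin() + it->position() + it->length();
      }
    res.append (tail, tpl.cend());
    return res;
}
// substitutePlaceholders ("Hello ${who}!", MapS{{"who","World"}})  ⟹  "Hello World!"
```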
...turns out challenging, since our intention here
borders on the limits of the intended design of the Lumiera ETD.
It ''should work'' though, when combined with a Variant-visitor...
Document existing data binding logic and investigate in detail
what must be done to enable a similar binding backed by Lumiera's ETD structures.
This analysis highlights some tricky aspects, which can be accommodated by
slight adjustments and generalisations in the `TextTemplate` implementation
* `GenNode` is not structured string data, rather binary data
* thus exposing a std::string_view is not adequate; rather, the result type
must be picked up from the actual data binding
* moreover, to allow for arbitrary nested scopes, a back-pointer
to the parent scope must be maintained, which requires stable memory locations.
This can best be solved within the InstanceCore itself, which manages
the actual hierarchy of data source references.
* the existing code happens already to fulfil this requirement, but
for sake of clarity, handling of such a nested scope is now extracted
into a dedicated operation, to highlight the guaranteed memory layout.
We use a DataSrc<DAT> template to access the actual data to be substituted.
However, when applying the Text-Template, we need to pick the right
specialisation, based on the type of the actual data provided.
Here we face several challenges:
* Class-Template-Argument-Deduction starts from the ''primary'' template's constructors.
If the primary declares none, the compiler will only try the copy constructor and will
never see the constructors of partial specialisations.
This can be fixed by providing a ''dummy constructor'' in the primary template.
* The specifics of how to provide a custom CTAD deduction guide
for a **nested template** are not well documented. I have found
several bug reports, and seemingly one of these bugs foiled
my various attempts. Moreover it is ''not clear if such a deduction
guide can even be given outside of the class definition scope.''
For the intended usage pattern this would be crucial, since users
are expected to provide further specialisations of the DataSrc-template
* Thus I resorted to the ''old school solution,'' which is to use
a ''free builder function'' as an extension point. Thus users could
provide further overloads for the `buildDataSrc()` function.
* Unfortunately, SFINAE tricks are way more limited for function overloads.
Thus it seems impossible to have a generic case alongside more specialised
ones, unless all special cases are disjoint.
Thus the solution is far from perfect, ''yet for the current situation it seems
sufficient'' (and C++20 Concepts will greatly help to resolve this kind of problem)
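Condensed into a sketch along the lines described above (details simplified, not the actual TextTemplate code):

```cpp
#include <map>
#include <string>

template<class DAT>                // primary template, open for further specialisation
struct DataSrc
  {
    DataSrc (DAT const&);          // »dummy constructor«: without any constructor here,
  };                               // CTAD would never consider the constructors
                                   // defined in the partial specialisations

template<class K, class V>         // specialisation: binding to map-like data
struct DataSrc<std::map<K,V>>
  {
    std::map<K,V> const& data;
    DataSrc (std::map<K,V> const& d) : data{d} { }
  };

// »old school« extension point: users provide further overloads
template<class K, class V>
auto
buildDataSrc (std::map<K,V> const& dat)
{
    return DataSrc<std::map<K,V>>{dat};
}
```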
...implemented by simply parsing the string into key=value pairs,
which are then stored into a shared map. The actual data binding
implementation can thus be inherited from the existing Map-binding
While they were detected just fine, they were passed through
unaltered, which subverts the purpose of such an escape,
which is to allow for the tag syntax to be present in the
processed, substituted document (e.g. when generating a
shell script)
thus `\${escaped}` becomes `${escaped}`
...using a ''special protocol'' to represent iterative data sequences
* use an Index-Key with a CSV list of element prefixes
* synthesise key-prefixes for each data element
* perform lookup with the decorated key first
This makes it possible to somehow ''emulate'' nested associations within a single, flat Map.
Obviously this is more like a proof-of-concept; actually the Map-databinding
is meant to handle the simple cases, where just placeholders are to be substituted.
The logic structures are much more relevant when binding to structural data,
most notably to the Lumiera _External Tree Description_ format, which is
used for model data and inter-layer communication.
- the basic interpretation of Action-tokens is already in place
- add the interpretation of conditional and looping constructs
- this includes helpers for
* reset to another Action-token index
* recursive interpretation of the next token
* handling of nested loop evaluation context
In order to make this implementation compile, also the skeleton
of the Map-string-string data binding must be completed, including
a draft of how to handle nested keys in a simple map
playing the »fence post problem« the other way round
and abandoning the ''pull processing'' in favour of direct manipulation
leads to much clearer formulation of the code-generation logic
...turns out the ''pipeline design'' is not a good fit for the
Action compilation, since the compiler needs to refer to previous Actions;
better to let the compiler ''build'' the `ActionSeq`
...implemented as »custom processing layer« within a
demand-driven parsing pipeline, with the ability to
inject additional Action-tokens to represent the intermittent
constant text between tags; special handling to expose one
constant postfix after the last active tag.
The way I've written this helper template, as a byproduct
it is also possible to maintain the back-reference to the container
through a smart-ptr. In this case, the iterator-handle also manages
the ownership automatically.
...mostly we want the usual convenient handling pattern for iterators,
but with the proviso to actually perform access by subscript,
and the ability to re-set to another current index
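Boiled down, such an iterator-handle could look like this (a sketch with assumed names; the smart-ptr ownership variant is omitted):

```cpp
#include <cstddef>

template<class CON>                    // CON must expose operator[] and size()
class IndexIter
  {
    CON*   container_;
    size_t idx_{0};

  public:
    explicit IndexIter (CON& c) : container_{&c} { }

    decltype(auto) operator*()  const { return (*container_)[idx_]; }   // access by subscript
    IndexIter& operator++()           { ++idx_; return *this; }
    explicit operator bool()    const { return idx_ < container_->size(); }

    void resetTo (size_t newIdx)      { idx_ = newIdx; }    // re-set to another index
  };
```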
* establish the feature set to provide
* choose scheme for runtime representation
* break down analysis to individual parsing and execution steps
* conclude which actions to conduct and the necessary data
* derive the abstract binding API required
Conducted an extended investigation regarding text templating
and the library solutions available and still maintained today.
The conclusion is
* there are some mature and widely used solutions available for C++
* all of these are considered a mismatch for the task at hand,
which is to generate Gnuplot scripts for test data visualisation
Points of contention
* all solutions offer a massive feature set, oriented towards web content generation
* all solutions provide their own structured data type or custom property-tree framework
**Decision** 🠲 better to write a minimalistic templating engine from scratch,
rather than adapting one of these library solutions
Read the documentation and find out how to generate the kind of diagram
necessary for visualisation of Scheduler-Stress-Test observations.
I used to have only basic Gnuplot knowledge, and thus had to find out about
- reading CSV
- supported diagram types
- layering and styling
Conclusion: will use Gnuplot and generate a script from Test code
showDecimal -> decimal10 (maximal precision to survive round-trip through decimal representation)
showComplete -> max_decimal10 (enough decimal places to capture each possible distinct floating-point value)
Use these new functions to rewrite the format4csv() helper
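Both precision levels correspond to standard traits; the distinction in a nutshell (digits10 = 15, max_digits10 = 17 for `double`):

```cpp
#include <iostream>
#include <iomanip>
#include <limits>

int main()
{
    using Lim = std::numeric_limits<double>;
    double val = 0.1;
    std::cout << std::setprecision(Lim::digits10)     << val <<'\n'   // "0.1"
              << std::setprecision(Lim::max_digits10) << val <<'\n';  // "0.10000000000000001"
}
```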
verify also that clean-up happens in case of exceptions thrown;
as an aside, add Macro to check for ''any'' exception and match
on something in the message (as opposed to just a Lumiera Exception)
...using the same method for sake of uniformity
Also move the permissions helpers to the file.hpp support functions
and set up a separate unit test for these
Inspired by https://stackoverflow.com/a/58454949
Verified behaviour of fs::create_directory
--> it returns true only if it ''indeed could create'' a new directory
--> it returns false if the directory exists already
--> it throws when some other obstacle shows up
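This behaviour can be demonstrated directly with standard `std::filesystem` (C++17):

```cpp
#include <filesystem>
#include <iostream>
namespace fs = std::filesystem;

int main()
{
    fs::path dir{"/tmp/demo_dir"};
    std::cout << fs::create_directory(dir) << '\n';        // 1 : indeed could create it
    std::cout << fs::create_directory(dir) << '\n';        // 0 : exists already
    try {
        fs::create_directory("/tmp/missing/parent/dir");   // parent path does not exist
    }
    catch (fs::filesystem_error const& err) {
        std::cout << "thrown: " << err.what() << '\n';     // some other obstacle
    }
}
```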
As an aside: the header include/limits.h could be cleaned up;
since it is used solely from C++ code, it could be typed, namespaced etc.
Since this is a much more complicated topic,
for now I decided to establish two instances through global variables:
* a sequence seeded with a fixed starting value
* another sequence seeded from a true entropy source
What we actually need however is some kind of execution framework
to define points of random-seeding and to capture seed values for
reproducible tests.
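As a first cut, the two global instances might look like this (names and the `rand()` replacement are hypothetical; the actual Lumiera types may differ):

```cpp
#include <random>

// sequence seeded with a fixed starting value ⟹ reproducible test runs
std::mt19937_64 defaultGen{55};

// sequence seeded from a true entropy source
std::mt19937_64 entropyGen{std::random_device{}()};

// low-ceremony drop-in replacement for rand()
int
rani (int bound)
{
    return std::uniform_int_distribution<int>{0, bound-1} (defaultGen);
}
```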
Relying on random numbers for verification and measurements is known to be problematic.
At some point we are bound to control the seed values -- and in the actual
application usage we want to record sequence seeding in the event log.
Some initial thoughts regarding this intricate topic.
* a low-ceremony drop-in replacement for rand() is required
* we want the ability to pick up and control each and every usage eventually
* however, some usages explicitly require true randomness
* the ability to use separate streams of random-number generation is desirable
Yesterday I decided to include some facilities I have written in 2022
for the Yoshimi-Testsuite. The intention is to use these as-is, and just
to adapt them stylistically to the Lumiera code base.
However — at least some basic documentation in the form of
very basic unit-tests can be considered »acceptance criteria«
- reformat in Lumiera-GNU style
- use the Lumiera exceptions
- use Lumiera format-string frontend
- use lib/util
NOTE: I am the original author of the code introduced here,
and thus I can re-license it under GPL 2+
Initially the model was that of a single graph starting
with one seed node and joining all chains into a single exit node.
This however is not well suited to simulate realistic calculations,
and thus the ability for injecting additional seeds and to randomly
sever some chains was added -- which overthrows the assumption of
a single exit node at the end, where the final hash can be retrieved.
The topology generation used to pick up all open ends, in order to
join them explicitly into a reserved last node; in the light of the
above changes, this seems like a superfluous complexity, and adds
a lot of redundant checks to the code, since the main body of the
algorithm, in its current form, already does all the necessary
bounds checks. It suffices thus to just terminate the processing
when the complete node space is visited and wired.
Unfortunately this requires to fix basically all node hashes
and a lot of the statistics values of the test; yet overall
the generated graphs are much more logical; so this change
is deemed worth the effort.
Allow easily to generate a Chain-Load with all nodes unconnected,
yet each node on a separate level.
Fix a deficiency in the graph generation, which caused spurious
connections to be added at the last node, since the prune rule
was not checked
...the previous setup produced a single linear chain
instead of a set of unconnected nodes.
With this, the behaviour is more like expected,
but concurrency is still too low
- better use a Test-Chain-Load without any dependencies
- schedule all at once
- employ instrumentation
- use the inner »overall time« as dependent result variable
The timing results now show an almost perfect linear dependency.
Also the inner overall time seems to omit the setup and tear-down time.
But other observed values (notably the avgConcurrency) do not line up
- fill the range randomly with probe points
- use the node count as independent parameter
- measurement method *works as intended*
- results indeed show a linear relationship
Results are ''interesting'' however, since the (par,time) points
seem to be arranged into two lines, implying that about half
of the runs were somehow ''degraded'' and performed way slower.
With the latest improvements, the »breaking point search« works as expected
and yields meaningful data; however — it seems rather suited for specific
setups, which involve an extended graph with massive dependencies,
because only such a setup produces a clearly defined ''breaking point.''
Thus I'm considering to complement this research by another measurement setup
to establish a linear regression model of the Scheduler expense.
To allow integration of this different setup into the existing stress-test-rig,
some rearrangements of the builder notation are necessary; especially we need
to pass the type name of the actual tool, and it seems indicated to
reorder the source code to provide the config base class `StressRig`
at the top, followed by a long (and very technical) implementation
namespace.
It turns out not to be correct to use all of the divergence in concurrency
as a form factor, since it is quite common that not all cores can be active
at every level, given the structural constraints dictated by the load graph.
On the other hand, if the empirical work concurrency (excluding wait time)
systematically differs from the simple model used for establishing the schedule,
then this should indeed be considered a form factor and deducted from
the effective stress factor, since it is not a reserve available for speed-up
The solution entertained here is to derive an effective compounded sum
of weights from the calculation used to build the schedule. This compounded
weight sum is typically lower than the plain sum of all node weights, which
is precisely due to the theoretical amount of expense reduction assumed
in the schedule generation. So this gives us a handle on the theoretically
expected expense, and through the plain weight sum we may draw conclusions
about the effective concurrency expected in this schedule.
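Spelled out as a formula (notation mine, derived from the description above): the concurrency implicitly assumed by the schedule is the ratio of both sums,

```latex
\hat{c} \;=\; \frac{W}{W_c}
\qquad\text{where}\quad
W = \sum_i w_i \;\;\text{(plain weight sum)},\quad
W_c \;\;\text{(compounded weight sum)}
```

and only the deviation of the empirically observed work concurrency from this ratio enters the form factor.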
Taking only this part as base for the empirical deviations yields search results
very close to stressFactor ~1 -- implying that the test setup now
observes what it was intended to observe...
In binary search, in order to establish the invariant initially,
a loop is necessary, since a single step might not be sufficient.
Moreover, the ongoing adjustments jeopardise detection of the
statistical breaking point condition, by causing a negative delta
due to gradually approaching the point of convergence -- leading
to an ongoing search in a region beyond the actual breaking point.
Various misconceptions identified in the feedback path of the test algorithm.
- statistics are cumulative, which must be accounted for by normalising on a time base
- average concurrency includes idle times, which is beside the point within this
test setup, since additional wait-phases are injected when reducing stress
Relying on the new instrumentation facility, the actually effective
concurrency and cumulative run time of the test jobs can be established.
These can now be cast into a form-factor to represent actual excess expenses
in relation to the theoretical model.
By allowing to adjust the adapted schedule by this form factor,
it can be made to reflect more closely the actual empiric load,
hopefully leading to a more realistic effect of the stress-factor
and thus results better suited to conclude on generic behaviour.
...turns out rather challenging to come up with any test case
that is meaningful, simple to set up and understand, yet still
produces somewhat stable values. `IncidenceCount` seems most valuable
for investigation and direct inspection of results
Various experiments to watch Scheduler behaviour under extended load.
Notably the example committed here makes the Scheduler run for 1.2 sec
and process 800 jobs with 10ms each, thereby putting the system into
100% load on all CPUs
- supplement the pre-dimensioning for data capture; without that,
sporadic memory corruption indeed happens (as expected, since
concurrent re-allocation of the vector with an entry for each
thread is not threadsafe, and this test shows much contention)
- add a top-level logging for better diagnostics of errors
emanating from the test run
Basically users are free to place the measurement calls to their liking.
This implies that bracketed measurement intervals can be defined overlapping,
even within a single thread, thereby counting the overlapping time interval
several times. However, for the time spent per thread, only actual thread
activity should be counted, disregarding overlaps. Thus introduce a
new aggregate, ''active time'', which is the sum of all thread times.
As an aside, do not need explicit randomness for the simple two-thread
test case — timings are random anyway...
+ bugfix for out-of-bounds access
...since we've established already an integration over the event timeline,
it is just one simple further step to determine the concurrency level
on each individual segment of the timeline. Based on this attribution
- the averaged concurrency within the observation range can be computed as weighted mean
- moreover we can account for the precise cumulated time spent at each concurrency level
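A condensed sketch of this attribution (simplified event model, not the actual IncidenceCount code): sweep over the begin/end events sorted by time, track the current concurrency level, and weight each timeline segment by its duration.

```cpp
#include <algorithm>
#include <map>
#include <vector>

struct Event { double time; bool begin; };     // begin or end of a thread activity

double
avgConcurrency (std::vector<Event> events)
{
    std::sort (events.begin(), events.end(),
               [](Event const& a, Event const& b){ return a.time < b.time; });
    int level = 0;
    double prevTime = 0, weightedSum = 0, total = 0;
    std::map<int,double> timeAtLevel;          // cumulated time per concurrency level
    for (auto const& e : events)
      {
        double seg = e.time - prevTime;
        if (level > 0)
          {
            weightedSum += level * seg;
            total += seg;
            timeAtLevel[level] += seg;         // precise time spent at each level
          }
        level += e.begin? +1 : -1;
        prevTime = e.time;
      }
    return total? weightedSum/total : 0;       // weighted mean over observation range
}
```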
...using a simplistic allocation of next-slot based on initialisation
of a thread_local storage. This implies that this helper can not be
reset or reused, and that there can not be multiple or long-lived instances.
Keep-it-simple for now...
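The slot allocation itself boils down to a one-liner (sketch):

```cpp
#include <atomic>
#include <cstddef>

std::atomic<size_t> slotCounter{0};

size_t
mySlot()                      // first access from each thread claims the next slot;
{                             // consequently the helper can not be reset or reused
    thread_local size_t slot = slotCounter.fetch_add(1);
    return slot;
}
```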
...to sort out the interpretation of measurement results,
the actual duration and concurrency of ComputationLoad invocations
should be recorded, allowing to draw conclusions regarding the
Scheduler's performance as opposed to further system and thread
management effects due to concurrent operation under pressure.
After an extended break due to "real life issues"....
Pick up the investigation, with the goal to ascertain a valid definition
and understanding of all test parameters. A first step is to establish
a baseline ''without using a computational load''; this might be some kind
of base overhead of the scheduler.
However -- the way the test scaffolding was built, it is difficult to
create a feedback loop for the statistical test setup with binary search,
since it is not really clear how the single control parameter of the test algorithm,
the so-called "stress factor", shall be interpreted and how it can be
combined with a base load.
An extended series of tests, while watching the observed value patterns qualitatively,
seems to corroborate the former results, indicating that the base expense
in my test setup (using a debug build) is at ~200µs / Node / core.
Yet the difficulty to interpret this result and arrive at a logical and generic model
prevents me from translating this into a measurement scheme, which can
be executed independently from a specific test setup and hardware
The goal is to devise a load more akin to the expected real-world processing patterns,
and then to increase the density to establish a breaking point.
Preliminary investigations focus on establishing the properties of this load
and to ensure the actual computation load behaves as expected.
Using the third Graph pattern prepared thus far, which produces
short chains of length=2, yet immediately spread out to maximum concurrency.
This leads to 5.8 Nodes / Level on average.
...as it turned out, this segfault was caused by flaws in the ScheduleCtx
used to generate the test-schedule; especially when all node-spreads are set
to zero and thus all jobs are scheduled immediately at t=0, there was a loophole
in the logic to set the dependencies for the final »wake-up« job.
When running such a schedule in the Stress-Test-Bench, the next measurement run
could be started due to a premature wake-up job, thereby overrunning the previous
test-run, which could still be in the middle of computations.
So this was not a bug in the Scheduler itself, yet something to take care of
later when programming the actual Job-Planning and schedule generation.
Search processing pattern for massive parallel test.
The goal is to get all cores into active processing most of the time,
thus we need a graph with low dependency management overhead, which is
also consistently wide horizontally to have several jobs in working state
all of the time. The investigation aims at finding out about systematic
overheads in such a setup.
This is just another (obvious) degree of freedom, which could be
interesting to explore in stress testing, while probably not of much
relevance in practice (if a job is expected to become runnable earlier,
it can as well just be scheduled earlier).
Some experimentation shows that the timing measurements exhibit more
fluctuations, but also slightly better times when pressure is low, which
is pretty much what I'd expect. When raising pressure, the average
times converge towards the same time range as observed with time bound
propagation.
Note that enabling this variation requires to wire a boolean switch
over various layers of abstraction; arguably this is an unnecessary
complexity and could be retracted once the »experimentation phase«
is over.
This completes the preparation of a Scheduler Stress-Test setup.
The `volatile` was used asymmetrically and there was concern that
this usage makes the `ComputationalLoad` dependent on concurrency.
However, an impact could not be confirmed empirically.
Moreover, to simplify this kind of tests, make the `schedDepends`
directly configurable in the Stress-Test-Rig.
...watching those dumps on the example Graph with excessive dependencies
made blatantly clear that we're dispatching a lot of unnecessary jobs,
since the actual continuation is ''always'' triggered by the dependency-NOTIFY.
Before the rework of NOTIFY-Handling, this was rather obscured, but now,
since the NOTIFY trigger itself is also dispatched by the Scheduler,
it ''must be this job'' which actually continues the calculation, since
the main job ''can not pass the gate'' before the dependency notification
arrives.
Thus I've now added a variation to the test setup where all these duplicate
jobs are simply omitted. And, as expected, the computation runs faster
and with less signs of contention. Together with the other additional
parameter (the base expense) we might now actually be able to narrow down
on the observation of an ''expense socket'', which can then be
attributed to something like an ''inherent scheduler overhead''
...actually difficult to integrate into the existing scheme,
which is entirely level-based. Can only be added to the individual Jobs,
not to the planning and completion-jobs — which actually shouldn't be a problem,
since it is beneficial to dispatch the planning runs earlier
The next goal is to determine basic performance characteristics
of the Scheduler implementation written thus far;
to help with these investigations some added flexibility seems expedient
- the ability to define a per-job base expense
- added flexibility regarding the scheduling of dependencies
This changeset introduces configuration options
While the idea with capturing observation values is nice,
it definitely does not belong in a library implementation of the
search algorithm, because this is usage-specific and grossly
complicates the invocation.
Rather, observation data can be captured by side-effect
from the probe-λ holding the actual measurement run.
This statistical criterion defines when to count observed Scheduler performance
as losing control. The test comprises three observations, which
all must be confirmed:
- an individual run counts as accidentally failed when the execution slips
away by more than 2ms with respect to the defined overall schedule.
When more than 55% of all observed runs are considered as failed,
the first condition is met
- moreover, the observed standard deviation must also surpass the
same limit of > 2ms, which indicates that the Scheduling mechanism
is under substantial strain (on average, the slip is ~ 200µs)
- the third condition is that the ''averaged delta'' has surpassed
4ms, which is 2 times the basic failure indicator.
These conditions are based on watching the Scheduler in operation;
typically all three conditions slip away by a large margin after a
very narrow yet critical increase in the stress level.
Using three conditions together should improve robustness; often
the problems to keep up with the schedule build up over some parameter
range, yet the actual decision should be based on complete loss of control.
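Condensed into code, the criterion reads roughly as follows (a sketch using the constants from above; names are mine):

```cpp
struct Observation { double percentGlitches, sdev, avgDelta; };  // per measurement series

const double FAIL_LIMIT_ms = 2.0;        // slip beyond schedule counting as failure

bool
isBreakingPoint (Observation const& obs)
{
    return obs.percentGlitches > 55.0        // majority of runs slipped > 2ms
       and obs.sdev     > FAIL_LIMIT_ms      // standard deviation beyond the same limit
       and obs.avgDelta > 2*FAIL_LIMIT_ms;   // averaged delta surpasses 4ms
}
```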
adapt the code written yesterday explicitly for the test case
into the new framework for performing a stress-test run.
Notable difference: times are converted to milliseconds immediately
Elaborate the draft to include all the elements used directly in the test case thus far;
the goal is to introduce some structuring and leave room for flexible configuration,
while implementing the actual binary search as library function over Lambdas.
My expectation is to write a series of individual test instances with varying parameters;
while it seems possible to add further performance test variations into that scheme later on.
- the goal is to run a binary search
- the search condition should be factored out
- thus some kind of framework or DSL is required,
to separate the technicalities of the measurement
from the specifics of the actual test case.
- repeated invocations of the same test setup for statistics
- the usual nasty 64-node graph with massive fork out
- limit concurrency to 4 cores
- tabulate data to look for clues regarding a trigger criteria
Hypothesis: The Scheduler slips off schedule when all of the
following three criteria are met:
- more than 55% glitches with Δ > 2ms
- σ > 2ms
- ∅Δ > 4ms
- schedule can now be adapted to concurrency and expected distribution of runtimes
- additional stress factor to press the schedule (1.0 is nominal speed)
- observed run-time now without Scheduler start-up and pre-roll
- document and verify computed numbers
...based on the adapted time-factor sequence
implemented yesterday in TestChainLoad itself
- in this case, the TimeBase from the computation load is used as level speed
- this »base beat« is then modulated by the timing factor sequence
- working in an additional stress factor to press the schedule uniformly
- actual start time will be added as offset once the actual test commences
...up to now, we've relied on a regular schedule governed solely
by the progression of node levels, with a fixed level speed
defaulting to 1ms per level.
But in preparation of stress testing, we need a schedule adapted
to the expected distribution of computation times, otherwise
we'll not be able to factor out the actual computation graph
connectivity. The goal is to establish a distinctive
**breaking point** when the scheduler is unable to cope with
the provided schedule.
The helper developed thus far produces a sequence of
weight factors per level, which could then be multiplied
with an actual delay base time to produce a concrete schedule.
These calculations, while simple, are difficult to understand;
recommended to use the values tabulated in this test together
with a `graphviz` rendering of the node graph (🠲 `printTopologyDOT()`)
The intention is to establish a theoretical limit for the expense,
given some degree of concurrency. In reality, the expense should always
be greater, since the time is not just split by the number of cores;
rather we need to chain up existing jobs of various weight on the available
cores (which is a special case of the box packing problem).
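The underlying estimate is the classic makespan bound for distributing n jobs with weights w_i onto c cores — given here for orientation, while the concrete weight-factor calculation in the code may differ in detail:

```latex
t_{level} \;\ge\; \max\Bigl(\, \max_i\, w_i \,,\;\; \tfrac{1}{c}\,\sum_{i=1}^{n} w_i \Bigr)
```

Equality would require splitting jobs arbitrarily; since jobs are atomic and must be chained onto the available cores, the real expense is generally higher.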
With this formula, an ideal weight factor can be determined for each level,
and then summing up the sequence of levels gives us a guess for a sensible
timing for the overall scheduler
...so IterExplorer got yet another processing layer,
which uses the grouping mechanics developed yesterday,
but is freely configurable through λ-Functions.
At the actual usage site in TestChainLoad, now only the actual
aggregation computation must be supplied, and follow-up computations
can be chained up easily as further transformation layers.
Yesterday I've written a simple loop-based implementation of
a grouping aggregation to count the node weights per level.
Unfortunately it turns out we'll use several flavours of this,
and we'd have to chain up postprocessing -- thus, from a usage perspective,
it would be better to have the same functionality packaged as an iterator pipeline.
This turns out to be surprisingly tricky and there is no suitable library
function available, which means I'll have to write one myself.
This changeset is the first step into this direction: reformulate
the simple for-loop into a demand-driven grouping iterator
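In spirit, the new building block performs the following — shown here as a plain-function sketch with hypothetical names (the actual implementation delivers the group results demand-driven, as a layer in the iterator pipeline):

```cpp
#include <vector>
#include <cstddef>

// fold consecutive elements sharing the same key with an aggregation function
template<class ELM, class KEYFUN, class AGGFUN>
std::vector<double>
groupAggregate (std::vector<ELM> const& src, KEYFUN key, AGGFUN agg)
{
    std::vector<double> res;
    size_t i = 0;
    while (i < src.size())
      {
        auto groupKey = key (src[i]);
        double sum = 0;
        for ( ; i < src.size() and key(src[i]) == groupKey; ++i)
            sum = agg (sum, src[i]);
        res.push_back (sum);
      }
    return res;
}
// e.g. node weights per level: key = node.level, agg = (sum,node) ⟼ sum + node.weight
```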
...the idea is to use the sum of node weights per level
to create a schedule, which more closely reflects the distribution
of actual computation time. Hopefully such a schedule can then be
squeezed or stretched by a time factor to find out a ''breaking point'',
at which the Scheduler is no longer able to keep up.
In-depth investigation and reasoning highlighted another problem,
which could lead to memory corruption in rare cases; in the end
I found a solution by caching the ''address'' of the current Epoch
and re-validating this address on each Epoch-overflow.
After some difficulties getting any reliable measurement for a Release-build,
it turned out that this solution even ''improves performance by 22%''
Remark-1: the static blockFlow::Config prevents simple measurements by
just recompiling one translation unit; it is necessary to build the
relevant parts of Vault-layer with optimisation to get reliable numbers
Remark-2: performing a full non-DEBUG build highlighted two missing
header-inclusions to allow for the necessary template specialisations.
...discovered during investigation of latest Scheduler failures.
The root of the problems is that block overflow can potentially trigger
expansion of the allocation pool. Under some circumstances, this on-the-fly
allocation requires a rotation of index slots, thereby invalidating
existing iterators.
While such behaviour is not uncommon with storage data structures (see std::vector),
in this case it turns out problematic because due to performance considerations,
a usage pattern emerged which exploits re-using existing storage »Slots« with known
deadline. This optimisation seems to have significant leverage on the
planning jobs, which happen to allocate and arrange a whole strike of
Activities with similar deadlines.
One of these problem situations can easily be fixed, since it is triggered
through the iterator itself, using a delegate function to request a storage expansion,
at which point the iterator is able to re-link and fix its internal index.
This solution also has no tangible performance implications in optimised code.
Unfortunately there remains one obscure corner case where such a pool expansion
could also have invalidated other iterators, which are then used later to
attach dependency relations; even a partial fix for that problem seems
to cause considerable performance cost of about -14% in optimised code.
- now there can not be any direct dispatch anymore when entering events
- thus there is no decision logic at entrance anymore
- rather the work-function implementation moved down into Layer-2
- so add a unit-test like coverage there (integration in SchedulerService_test)
This amounts to a rather massive refactoring, prompted by the enduring problems
observed when pressing the scheduler. All the various glitches and (fixed) crashes
are related to the way how planning-jobs enter the schedule items,
which is also closely tied to the difficulties getting the locking
for planning-jobs correct.
The solution pursued hereby is to reorder the main avenues into the
scheduler implementation. There is now a streamlined main entrance,
which **always** enqueues only, allowing to omit most checks and
coordination. On the other hand, the complete coordination and dispatch
of the work capacity is now shifted down into the SchedulerCommutator,
thereby linking all coordination and access control close together
into a single implementation facility.
If this works out as intended
- several repeated checks on the Grooming-Token could be omitted (performance)
- the planning-job would no longer be able to lose / drop the Token,
and would thus be enforced to run single-threaded (as was the original intention)
- since all planning effectively originates from planning-jobs, this
would allow to omit many safety barriers and complexities at the
scheduler entrance avenue, since now all entries just go into the queue.
WIP: tests compile, but must be adapted / reworked
...whenever the planning falls behind schedule, it can happen that
the planner-worker immediately dispatches its own jobs; while the calculation
is broken anyway in this situation, especially this call scheme leads to
dropping the Grooming-Token prior to the calculation dispatched directly.
Since the dependency relation can only be established after creating
both predecessor and successor schedules, the corresponding allocation
of the NOTIFY-Activity is not protected against concurrent access,
which probably leads to the assertion failure due to corruption of
the allocator's internal data structures...
- fix mistake in schedule time for planning chunks (must use start, not end of chunk)
- allow to configure the heuristics for pre-roll (time reserved for planning a node)
...observing multiple failures, which seem to be interconnected
- the test-setup with the planning chunk pre-roll is insufficient
- basically it is not possible to perform further concurrent planning,
without getting behind the actual schedule; at least in the setup
with DUMP print statements (which slow down everything)
- multiple chained re-entrant calls into the planning function can result
- the **ASSERTION in the Allocator** was triggered again
- the log+stacktrace indicate that there **is still a Gap**
in the logic to protect the allocations via Grooming-Token
...causing the system to freeze due to excess memory allocation.
Fortunately it turned out this was not an error in the Scheduler core
or memory manager, but rather a sloppiness in the test scaffolding.
However, this incident highlights that the memory manager lacks some
sanity checks to prevent outright nonsensical allocation requests.
Moreover it became clear again that the allocation happens ''already before''
entering the Scheduler — and thus the existing sanity check comes too late.
Now I've used the same reasoning also for additional checks in the allocator,
limiting the Epoch increment to 3000 and the total memory allocation to 8GiB
Talking of Gibibytes...
indeed we could use a shorthand notation for that purpose...
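For instance, a user-defined literal (a sketch of the idea, not implying this is how it was actually done):

```cpp
#include <cstddef>

constexpr size_t
operator""_GiB (unsigned long long siz)
{
    return size_t(siz) << 30;        // 1_GiB == 2^30 bytes
}

static_assert (8_GiB == 8ull*1024*1024*1024);
```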
The scheduler implementation uses a randomised redistribution of
work capacity, taking into account the current ''scale'' of next pending event.
While this works surprisingly well overall, sometimes, in very tight and dense schedules
the workers seem to be spread somewhat too arbitrarily. Thus, if the scheduler
is working through a zone with several events as close as 1ms, often it takes
up to 3ms for another worker to show up.
With this change, the scattering range in the ''near zone'' (50µs ... 5ms)
is made dynamic, and now flexibly depends on current head time.
The closer the next event, the more tightly focussed will be the
capacity redistribution; if capacity becomes available just some 100µs
ahead of next demand, it is no longer „sent away“, but rather relocated
by roughly the same distance behind the next event.