...due to the decision to use a much simpler allocation scheme,
to increase the probability of actual savings; after switching the API
and removing all trading-related aspects, a lot of further code is now obsolete.
Notably this raises the difficult question
whether to ensure **invocation of destructors**.
Not invoking dtors ''breaks one of the most fundamental contracts''
of the C++ language — yet the infrastructure to invoke dtors in such
a heterogeneous cluster of allocations creates a hugely significant
overhead and is bound to poison the caches (objects to be deallocated
typically sit in cold memory pages).
What makes this decision especially daunting is the fact that the
low-level-Model can be expected to be one of the largest systemic
data structures (leaving aside the media buffers).
I am leaning towards a compromise: hand this decision
down to the user of the `AllocationCluster`.
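To illustrate the direction of this compromise — a minimal sketch with hypothetical names, assuming a simple arena which just drops its storage; only objects registered explicitly get their dtor invoked:

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

class AllocationCluster
  {
    std::vector<std::unique_ptr<std::byte[]>> storage_;
    std::vector<std::pair<void*, void(*)(void*)>> deleters_;

    void*
    allotMemory (std::size_t siz)
      { // simplified: the real scheme would sub-allocate linearly from extents
        storage_.emplace_back (new std::byte[siz]);
        return storage_.back().get();
      }

  public:
   ~AllocationCluster()
      { // only objects explicitly registered get their destructor invoked
        for (auto& [obj,del] : deleters_)
            del (obj);
      }

    /** create an object to be dropped silently together with the cluster */
    template<class X, typename...ARGS>
    X&
    createDisposable (ARGS&& ...args)
      {
        return * new(allotMemory (sizeof(X))) X{std::forward<ARGS> (args)...};
      }

    /** create an object with guaranteed destructor invocation,
     *  accepting the per-object bookkeeping overhead */
    template<class X, typename...ARGS>
    X&
    create (ARGS&& ...args)
      {
        X& obj = createDisposable<X> (std::forward<ARGS> (args)...);
        deleters_.emplace_back (&obj, +[](void* o){ static_cast<X*>(o)->~X(); });
        return obj;
      }
  };
```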
After some analysis, it became clear that the existing code for
`AllocationCluster` (while in itself valid) will likely miss the point
for the expected usage in the low-level Model: most segments of the
model will be rather small, and thus there is not enough potential for
amortisation when using such a per-type and per-segment scheme;
a rather simplistic linear allocator will be sufficient.
On the other hand, with the current C++ standard it is easy to provide
a compliant allocator implementation for STL containers, and thus the
interface should be retro-fitted accordingly.
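How such a retro-fitted interface might look — a sketch only, assuming the cluster exposes a raw `allotMemory(size)` entry point; since the linear allocator never frees individual objects, `deallocate` can be a no-op:

```cpp
#include <cstddef>
#include <vector>

// stand-in for the real AllocationCluster, assuming raw linear allocation
struct AllocationCluster
  {
    void* allotMemory (std::size_t siz);   // bump allocation (not shown)
  };

template<typename T>
class ClusterAllocator
  {
    template<typename U> friend class ClusterAllocator;
    AllocationCluster* cluster_;
  public:
    using value_type = T;

    explicit ClusterAllocator (AllocationCluster& c) : cluster_{&c} {}

    template<typename U>
    ClusterAllocator (ClusterAllocator<U> const& o) : cluster_{o.cluster_} {}

    T*   allocate (std::size_t n) { return static_cast<T*> (cluster_->allotMemory (n * sizeof(T))); }
    void deallocate (T*, std::size_t) noexcept { /* no-op: storage lives as long as the cluster */ }

    template<typename U>
    bool operator== (ClusterAllocator<U> const& o) const { return cluster_ == o.cluster_; }
    template<typename U>
    bool operator!= (ClusterAllocator<U> const& o) const { return not (*this == o); }
  };

// e.g.   AllocationCluster clu;
//        std::vector<int, ClusterAllocator<int>> nums (ClusterAllocator<int>{clu});
```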
At the time of the initial design attempts, I naively created a
classic interface to describe a fixed container allocated ''elsewhere.''
Meanwhile the C++ language has evolved and this whole idea looks
much more as if it could be a ''Concept'' (C++20). Moreover, having
several implementations of such a container interface is deemed inadequate,
since it would necessitate ''at least two indirections'' — while
going the Concept + Template route would allow working without any
indirection, given our current understanding that the `ProcNode` itself
is ''not an interface'' — rather a building block.
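A sketch of what the Concept + Template route could look like (C++20; all names hypothetical, not the actual design):

```cpp
#include <concepts>
#include <cstddef>
#include <utility>

template<typename C>
concept LinkedElements =                        // hypothetical concept to describe the
    requires (C const& elms, std::size_t idx)   // "fixed container allocated elsewhere"
      {
        { elms.size() } -> std::convertible_to<std::size_t>;
          elms[idx];                            // element access
      };

template<LinkedElements LEADS>
class ProcNode                    // a building block, not an interface:
  {                               // no virtual dispatch, no indirection
    LEADS leads_;                 // predecessor connections, embedded directly
  public:
    ProcNode (LEADS leads) : leads_{std::move (leads)} {}

    std::size_t fanIn()  const { return leads_.size(); }
  };
```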
- the starting point is the idea to build a dedicated ''turnout system''
- `StateAdapter`, `BuffTable` ⟶ `FeedManifold` and _Invocation_ will be fused
- actually, the `TurnoutSystem` will be ''pulled'' and orchestrate the invocation
- the structure is assumed to be recursive
The essence of the Node-Invocation, as developed in 2009/2011, remains intact,
yet it will be organised along a clearer structure
Within the existing body of code, there are two unfinished attempts
towards building a node invocation and management of data buffers.
The first attempt was entirely driven from the angle of invoking a
processing function, while the second one draws from a wider scope
and can be considered the solution to build upon regarding data buffers
in general. However, the results of the first approach are well suited
for their specific purpose, so both solutions will be combined.
Thus the arrangement of data feeds going in and out of the render node
shall be renamed: `BuffTable` ⟶ `FeedManifold`
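As a rough illustration of what the `FeedManifold` bundles (structure and names purely hypothetical at this point):

```cpp
#include <array>
#include <cstddef>

struct Buff;                              // stand-in for the actual buffer type

const std::size_t MAX_FEEDS = 8;          // assumption made for this sketch

struct FeedManifold                       // formerly BuffTable
  {
    std::array<Buff*, MAX_FEEDS> inFeed;  // buffers pulled from predecessor nodes
    std::array<Buff*, MAX_FEEDS> outFeed; // buffers to receive results of this invocation
  };
```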
...which seems to be basically fine thus far
...beyond some renaming and rearranging
''it turns out that the final, crucial links,
necessary to tie all together, are yet to be developed''
Facing quite some difficulties here, since there are (at least)
two abandoned past efforts towards building a render node network
in the code base; the structure and architecture decisions from these
previous attempts seem largely valid still, yet on a technical level,
the style of construction evolved considerably in the meantime. Moreover,
these old fragments of code, written during the early stages of the
project, were lacking clear goals and anchor points in places;
the situation is quite different now in this respect.
Sticking to well proven practice, the rework will be driven by a test setup,
and will progress over three steps with increasing levels of integration.
The initial effort of building a Scheduler can now be **considered complete**
Reaching this milestone required considerable time and effort, including
an extended series of tests to weed out obvious design and implementation flaws.
While the assessment of the new Scheduler's limitations and traits is ''far from complete,''
some basic achievements could be confirmed through this extended testing effort:
* the Scheduler is able to follow a given schedule effectively,
until close up to the load limit
* the ''stochastic load management'' causes some latency on isolated events,
in the order of < 5ms
* the Scheduler is susceptible to degradation through Contention
* as mitigation, the Scheduler prefers to reduce capacity in such a situation
* operating the Scheduler effectively thus requires a minimum job size of 2ms
* the ability for sustained operation under full nominal load has been confirmed
by performing **test sequences of over 80 seconds**
* beyond the mentioned latency (<5ms) and a typical turnaround of 100µs per job
(for debug builds), **no further significant overhead** was found.
Design, Implementation and Testing were documented extensively in the [https://lumiera.org/wiki/renderengine.html#Scheduler%20SchedulerProcessing%20SchedulerTest%20SchedulerWorker%20SchedulerMemory%20RenderActivity%20JobPlanningPipeline%20PlayProcess%20Rendering »TiddlyWiki« #Scheduler]
This test completes the stress-testing effort
and summarises the findings
* Scheduler performs within relevant parameter range without significant overhead
* Scheduler can operate with full load in stable state, with 100% correct result
The behaviour seems consistent and the schedule breaks at the expected point.
At first sight, concurrency seems slightly too low; detailed investigation
however shows that this is due to the structure of the load graph,
and in fact the run time comes close to optimal values.
the `BreakingPoint` tool conducts a binary search to find the ''stress factor''
where a given schedule breaks. There are some known deviations related to the
measurement setup, which unfortunately impact the interpretation of the
''stress factor'' scale. Earlier, an attempt was made to watch those factors
empirically and work a ''form factor'' into the ''effective stress factor''
used to guide this measurement method.
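The core of the method, stripped of all measurement details — an illustrative sketch of such a bisection over the stress factor (names hypothetical):

```cpp
#include <functional>

// bisect the stress factor until bracketing the point
// where the given schedule starts to break down
double
findBreakingPoint (std::function<bool(double)> scheduleBreaksAt,
                   double lo = 0.5, double hi = 4.0, double epsilon = 0.01)
{
    while (hi - lo > epsilon)
      {
        double stressFac = (lo + hi) / 2;
        if (scheduleBreaksAt (stressFac))
            hi = stressFac;     // breaks: breaking point lies below
        else
            lo = stressFac;     // holds: breaking point lies above
      }
    return (lo + hi) / 2;
}
```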
Closer investigation with extended and elastic load patterns now revealed
a strong tendency of the Scheduler to scale down the work resources when not
fully loaded. This may be misread by the above-mentioned adjustments as a sign
of a structural limitation of the possible concurrency.
Thus, as a mitigation, those adjustments are now only performed at the
beginning of the measurement series, and also only when the stress factor
is high (implying that the scheduler is actually overloaded and thus has
no incentive for scaling down).
These observations indicate that the »Breaking Point« search must be taken
with a grain of salt: especially when the test load does ''not'' contain
a high degree of interdependencies, it will be ''stretched elastically''
rather than outright broken. And under such circumstances, this measurement
actually gauges the Scheduler's ability to comply with an established
load and computation goal.
...well — more of a logical contradiction, not so much a bug.
The underlying problematic situation arises when, in the meantime, the
Extent storage has been expanded, and especially when the active slots
are in »wrapped state«. In this case, the newly allocated extents
must be rotated in, which invalidates existing index numbers.
This problem was addressed by exploiting a caching mechanism, allowing
to re-attach and validate an index position still stored in an old
iterator; this can happen especially when attempting to attach a
follow-up dependency onto a job planned earlier, but not yet scheduled.
The problem here was an assertion failure, which was triggered with a
high probability; the fix for the problem detailed above used the yield()
function, while it actually was only interested in retrieving the
Extent's address, to probe if the extent matches a known storage location.
The solution is to provide a dedicated function for this check, which
can then skip the sanity check (because in this case we do not want
to use the Extent, and thus can touch obsoleted/inactive Extents
without problem).
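A sketch of the resulting shape of the code (fragment, names hypothetical):

```cpp
#include <cassert>

struct Extent;                // stand-in for the real storage extent

class ExtentIter              // fragment, names hypothetical
  {
    Extent* pos_;
    bool isValid() const { return true; }   // stub: real sanity check not shown
  public:
    Extent&
    yield() const             // regular access: enforces the sanity check
      {
        assert (isValid());
        return *pos_;
      }
    void const*
    identity() const          // address-only probe: deliberately skips the check,
      {                       // since the Extent itself will not be touched
        return pos_;
      }
  };
```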
...this seems to be the last topic for this investigation of Scheduler behaviour;
the goal is to demonstrate readiness for stable-state operation over an extended period of time
- use parameters known to produce a clean linear model
- assert on properties of this linear model
Add extended documentation into the !TiddlyWiki,
with a textual account of the various findings,
also including some of the images and diagrams,
rendered as SVG
This amends test code which had been commented out for some time,
and was affected by the changes in load-graph generation:
a983a506b
These changes typically lead to a simplified topology at the end
of the load graph, since open ends are no longer connected to a
single exit node. In the case here, level 27 is no longer generated,
and level 26 now comprises three nodes, two of them with load=2.
In the end, I decided that it ''is too early to decide anything'' in this respect...
The actual situation encountered is a **Catch-22**:
* in its current form, the »Tick« handler detects compulsory jobs beyond deadline
* since such a Job ''must not be touched anymore,'' there is no way scheduling can proceed
* so this would constitute a ''Scheduler Emergency''
All fine — just the »Tick« handler ''itself is a compulsory job'' — and being a job, it can well be driven beyond its deadline. In fact this situation was encountered as part of stress testing.
Several mitigations or real solutions are conceivable, but in the end,
too little is known yet regarding the integration of the scheduler within the Engine
Thus I marked the problematic location and opened #1362.
Investigate the behaviour over a wider range of job loads,
job count and worker pool sizes. Seemingly the processing
cannot fully utilise the available worker-pool capacity.
By inspection of trace-dumps, one impeding mechanism could
be identified: the »stickiness« of the contention mitigation.
Whenever a worker encounters repeated contention, it steps up
and adds more and more wait cycles to remove pressure from the
schedule coordination. As such this is fine and prevents further
degradation of performance by repeated atomic synchronisation.
However, this throttling was kept up needlessly after further
successful work-pulls. Since job times of several milliseconds
can be expected on average in media processing, such a long
retention would spread a performance degradation over a duration
of several frames. Thus, the scheme for step-down was changed
to decrease the throttling by a power series, rather than just
decrementing the level.
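Sketch of the changed step-down scheme (simplified, names hypothetical): contention steps the wait cycles up, while each successful work-pull now decays them geometrically:

```cpp
#include <algorithm>

class ContentionThrottle
  {
    int waitCycles_ = 0;
  public:
    // repeated contention: step up to remove pressure from coordination
    void onContention()  { waitCycles_ = std::max (1, waitCycles_ * 2); }

    // successful work-pull: decay throttling geometrically (power series),
    // instead of decrementing one level at a time
    void onSuccess()     { waitCycles_ /= 2; }

    int  currentBackoff() const { return waitCycles_; }
  };
```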
Use the statistic functions imported recently from Yoshimi-test
to compute a linear regression model as immediate test result.
Combining several measurement series, this allows to draw conclusions
about some generic traits and limitations of the scheduler.
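For reference, the underlying computation is a plain least-squares fit; a self-contained sketch (the imported statistic functions may differ in detail):

```cpp
#include <utility>
#include <vector>

struct LinearModel { double intercept, gradient; };   // y ≈ intercept + gradient·x

LinearModel
computeLinearRegression (std::vector<std::pair<double,double>> const& points)
{
    double n = points.size();
    double sx=0, sy=0, sxx=0, sxy=0;
    for (auto const& [x,y] : points)
      {
        sx  += x;    sy  += y;
        sxx += x*x;  sxy += x*y;
      }
    double gradient  = (n*sxy - sx*sy) / (n*sxx - sx*sx);
    double intercept = (sy - gradient*sx) / n;
    return {intercept, gradient};
}
```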
Visual tweaks specific to this measurement setup
* include a numeric representation of the regression line
* include descriptive axis labels
* improve the key names to clarify their meaning
* heuristic code for the x-ticks
Package these customisations as a helper function into the measurement tool
After a lot of further tinkering, seemingly arriving at a
somewhat satisfactory solution for the layout and arrangement of
test definitions and especially the table for measurement series.
While the complete setup remains fragile indeed, and complexity is more
hidden than reduced — the pragmatic compromise established yesterday
at least allows reducing the amount of boilerplate in the test or
measurement setup, so the actual specifics stand out clearly.
----
As an aside, the `DataFile` type recently imported from Yoshimi-test
was re-shaped towards generic handling of tabular data with a
CSV storage option; the type is thus renamed to `DataTable`.
Persistent storage is now just one option, while another usage pattern
compounds observation data into table rows, which are then directly
rendered into a CSV string, e.g. for visualisation as Gnuplot graph.
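The second usage pattern in a nutshell — a self-contained sketch with a simplified stand-in for the real `DataTable`:

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

class DataTable           // simplified stand-in
  {
    std::string header_;
    std::vector<std::string> rows_;
  public:
    DataTable (std::string header) : header_{std::move (header)} {}

    template<typename P, typename T>
    void newRow (P param, T runTime)       // compound observation data row-wise
      {
        std::ostringstream line;
        line << param << ',' << runTime;
        rows_.push_back (line.str());
      }

    std::string
    renderCSV() const     // render accumulated observations directly into CSV
      {
        std::ostringstream csv;
        csv << header_ << '\n';
        for (auto const& row : rows_)
            csv << row << '\n';
        return csv.str();
      }
  };
```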
Encountering ''just some design problems related to the test setup,''
which however turn out to be hard to overcome. Seems that, in my eagerness
to create a succinct and clear presentation of the test, I went into
dangerous territory, overstretching the abilities of the C++ language.
After working with a set of tools created step by step over an extended span of time,
''for me'' the machinations of this setup seem reduced to flipping a toggle
here and there, and I want to focus on these active parts while laying out this test.
''This would require'' creating a system of nested scopes, gradually getting
more and more specific while moving towards the individual case in question; notably any
clarification and definition within those inner, focused contexts would have to be
picked up and linked in dynamically.
Yet the C++ language only allows to be ''either'' open and flexible towards
the actual types, or ''alternatively'' to select dynamically within a fixed
set of (virtual) methods, which then must be determined from the beginning.
It is not possible to tweak and adjust base definitions after the fact,
and it is not possible to fill in constant definitions dynamically
with late binding to some specific implementation type provided only
at the current scope.
Seems that I am running against that brick wall over and over again,
piling up complexities driven by a desire for succinctness and clarity.
Now attempting to resolve this quite frustrating situation...
- fix the actual type of the TestChainLoad by a typedef in test context
- avoid the definitions (and thus the danger of shadowing)
and use one `testSetup()` method to place all local adjustments.
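The pattern, illustrated with a self-contained stand-in type:

```cpp
struct ChainLoadSetup                 // stand-in for the real TestChainLoad
  {
    int levels = 10;
    ChainLoadSetup withLevels (int n) { levels = n; return *this; }
  };

using TestLoad = ChainLoadSetup;      // fix the actual type by typedef in test context

TestLoad
testSetup()                           // one place for all local adjustments
  {
    return TestLoad{}.withLevels (32);
  }
```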
With the addition of a second tool `bench::ParameterRange`,
the setup of the test-context for measurement became confusing,
since the original scheme was mostly oriented towards the
''breaking point search.''
On close investigation, I discovered several redundancies, and
moreover, it seems questionable to generate an ''adapted-schedule''
for the Parameter-Range measurement method, which aims at overloading
the scheduler and watching the time needed to resolve such a load peak.
The solution entertained here is to move most of the schedule-ctx setup
into the base implementation, which is typically just inherited by the
actual testcase setup. This allows to leave the decision whether to build
an adapted schedule to the actual tool. So `bench::BreakingPoint` can
always set up the adapted schedule with a specific stress factor,
while `bench::ParameterRange` by default does nothing in this
respect, and thus the `ScheduleCtx` will provide a default schedule
with the configured level-duration (and the default for this is
lowered to 200µs here).
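Sketched with stand-in types (the real interfaces differ in detail):

```cpp
#include <chrono>
using namespace std::chrono_literals;

struct ScheduleCtx                    // stand-in exposing the two relevant knobs
  {
    void withLevelDuration (std::chrono::microseconds) { /* ... */ }
    void withAdaptedSchedule (double /*stressFac*/)    { /* ... */ }
  };

struct TestSetupBase                  // base implementation, typically just inherited
  {
    virtual ~TestSetupBase() = default;

    virtual void
    prepareSchedule (ScheduleCtx& ctx, double)
      {                               // default: plain schedule with configured level duration
        ctx.withLevelDuration (200us);
      }
  };

struct BreakingPointSetup : TestSetupBase
  {
    void
    prepareSchedule (ScheduleCtx& ctx, double stressFac) override
      {                               // breaking-point search always adapts the schedule
        ctx.withAdaptedSchedule (stressFac);
      }
  };
```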
In a similar vein, calculation of result data points from the raw measurement
is moved over into the actual test setup, thereby gaining flexibility.
Rework the existing tool to capture the measurement series
into the newly integrated CSV-based data storage, allowing
the results to be turned into a Gnuplot visualisation.
...which is added automatically whenever additional data columns are present
Result can only be verified visually
* the upper diagram should show the first Fibonacci points
* a (correct) linear regression line should be overlaid in red
* below, a secondary diagram should appear, with aligned axis
* the row "one" in this diagram should be shown as impulses
* the further rows "two" and "three" should be drawn as
green points, using the secondary Y-axis (values 100-250)
* Gnuplot can handle missing data points
The idea is to build the Layout-branching into the generated Gnuplot script,
based on the number of data columns detected. If there is at least one further
data column, then the "multiplot" layout will be used to feature this
additional data in a secondary diagram below with aligned axis;
if more than one additional data column is present, all further
visualisation will draw points, using the secondary Y-axis
Moreover, Gnuplot can calculate the linear regression line itself;
the drawing is then done with an `arrow` command, after
defining a function regLine(x) based on the linear model.
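A simplified sketch of generating such a script — the embedded Gnuplot commands follow the scheme described above, while the surrounding C++ is illustrative only:

```cpp
#include <sstream>
#include <string>

std::string
generatePlotScript (int dataColumns)          // columns detected in the CSV data
{
    std::ostringstream script;
    script << "stats $RunData using 1:2 nooutput\n"   // Gnuplot fits the linear model
              "regLine(x) = STATS_slope * x + STATS_intercept\n"
              "set arrow 1 from STATS_min_x, regLine(STATS_min_x)"
              " to STATS_max_x, regLine(STATS_max_x) nohead lc rgb 'red'\n";
    if (dataColumns <= 2)
        script << "plot $RunData using 1:2 with points\n";
    else
      { // further data columns present: use a multiplot layout,
        // drawing additional data as points against the secondary y-axis
        script << "set multiplot layout 2,1\n"
                  "plot $RunData using 1:2 with points\n"
                  "set y2tics\n"
                  "plot for [i=3:" << dataColumns << "] $RunData using 1:i"
                  " axes x1y2 with points\n"
                  "unset multiplot\n";
      }
    return script.str();
}
```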
- `forElse` belongs to the metaprogramming utils
- have a CSVLine, which is a string with custom appending mechanism
- this in turn allows CSVData to accept arbitrary sized tuples,
by rendering them into CSVLine
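A sketch of how these pieces could fit together (simplified):

```cpp
#include <sstream>
#include <string>
#include <tuple>

struct CSVLine : std::string
  {
    template<typename VAL>
    CSVLine&
    operator+= (VAL const& val)        // custom appending: render a field + separator
      {
        std::ostringstream render;
        render << val;
        if (not empty()) append (",");
        append (render.str());
        return *this;
      }
  };

template<typename...VALS>
CSVLine
renderCSV (std::tuple<VALS...> const& row)    // accept arbitrary-sized tuples
  {
    CSVLine line;
    std::apply ([&](auto const& ...val){ (line += ... += val); }, row);
    return line;
  }
```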
the metafunction `is_basically<X>` performed only an equality match,
while, given its current usage, it should also include a subtype-interface-match.
This changes especially the `is_StringLike<S>` metafunction,
both on const references and on classes built on top of string
or string_view.
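In essence (simplified; actual name and parameter order in the codebase may differ):

```cpp
#include <string>
#include <type_traits>

template<typename T, typename X>     // does T "basically" count as an X?
struct is_basically
  : std::bool_constant<   std::is_same_v   <X, std::decay_t<T>>
                       or std::is_base_of_v<X, std::decay_t<T>>>
  { };

struct StringLike : std::string { using std::string::string; };

static_assert (is_basically<std::string const&, std::string>::value);  // equality after decay
static_assert (is_basically<StringLike, std::string>::value);          // subtype-interface-match
```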
Whenever a class defines a single-arg templated constructor,
there is danger to shadow the auto-generated copy operations,
leading to insidious failures.
Some months ago, I did the ''obvious'' and added a tiny helper,
allowing to mask out the dangerous case when the ''single argument''
is actually the class itself (meaning it is a copy invocation and
not meant to go through this templated ctor...).
As this already turned out to be tremendously helpful, I have now extended
this helper to also cover cases where the problematic constructor
accepts variadic arguments, which is quite common with builder-helpers.
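The gist of the helper and its variadic extension, as a self-contained sketch (helper name hypothetical):

```cpp
#include <type_traits>
#include <utility>

template<typename SELF, typename...ARGS>
struct shall_useTemplateCtor          // generally: use the templated ctor...
  : std::true_type { };

template<typename SELF, typename ARG> // ...unless the single argument is SELF itself,
struct shall_useTemplateCtor<SELF,ARG>// which means a copy/move invocation
  : std::bool_constant<not std::is_same_v<SELF, std::decay_t<ARG>>> { };

class Builder
  {
  public:
    template<typename...ARGS,
             typename = std::enable_if_t<shall_useTemplateCtor<Builder,ARGS...>::value>>
    Builder (ARGS&& ...args)          // variadic forwarding ctor, common for builder-helpers
      { (void)sizeof...(args); /* ...consume args... */ }

    Builder (Builder const&) = default;    // no longer shadowed by the template above
    Builder (Builder&&)      = default;
  };
```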
The intention is to create a library of convenient building blocks;
providing a visualisation should be as simple as invoking a free function
with CSV data, yet with the ability to tweak some labels or display
variations if desired.
This can be achieved by...
* having a series of ready-made standard visualisations
* expose a function call for each, accepting a data-context builder
* provide secondary convenience shortcuts, which add some of the expected bindings
* notably a shortcut is provided to take the data as CSV-string
* augmented by a wrapper/builder to allow defining data points inline
Basically GenNode and the enclosed record were designed to be
immutable — yet some valid usage patterns emerged where gradually
building structure seems adequate — which can be accommodated by
entering a ''mutation mode'' explicitly through the Rec::Mutator.
Over time, a builder usage style came into widespread use, especially
when building test data structures. There seems to be no deeper reason
preventing the Mutator from being ''moved'' — notwithstanding the fact
that using such a ''movable builder'' can be dangerous, especially
when digressing from the strict »fluent inline builder« usage style
and storing the mutator into a variable.
For populating the data context for a text template instantiation,
we have now a valid case where it seems helpful to partially populate
the context and then move it further down into an implementation
function, which does the bulk of the work.
Deliberately keep it unstructured and add dedicated functions
for each new emerging use case; hopefully some common usage scheme
will emerge over time.
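The motivating pattern from above, as a self-contained sketch with a simplified stand-in for `Rec::Mutator`:

```cpp
#include <map>
#include <string>
#include <utility>

struct Mutator                        // simplified stand-in for Rec::Mutator
  {
    std::map<std::string,std::string> data;

    Mutator&&
    set (std::string key, std::string val) &&    // builder API, now movable
      {
        data[std::move (key)] = std::move (val);
        return std::move (*this);
      }
  };

void
instantiateTemplate (Mutator context)            // does the bulk of the work
  { /* ...bind remaining keys, render output... */ }

void
render (std::string title)
  {                                   // partially populate, then move down
    instantiateTemplate (Mutator{}.set ("title", std::move (title)));
  }
```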
* Data is to be handed in as an iterator over CSV-strings.
* will have to find out about additional parametrisation on a case-by-case basis
The default visuals of gnuplot are simple,
yet tend to look cluttered and are not well suited for our purpose
We need the following presentation
* a scatter diagram with a regression line
* additionally a secondary diagram stacked below, with aligned axis
Thus 🠲 R-T-F-M
* The [http://gnuplot.info/ Gnuplot docu] is exhaustive, yet hard to get into
* Helpful was this collection of [http://gnuplotting.org/ example solutions for scientific plots]
* and — Stackoverflow...
A minimalist `TextTemplate` engine is available for in-project use.
* supports only the bare minimum of features (no programming language)
* substitution of `${placeholder}` by key-name data access
* conditional section `${if key}...${end if}`
* iteration over a data sequence
* unlike most solutions available as libraries,
this implementation does **not require** a specific data type,
nor does it invent a dynamic object system or JSON backend;
rather, a generic ''Data Source Adapter'' is used, which can
be specialised to access any kind of ''structured data''
* the following `DataSource` specialisations are provided
* `std::map<string,string>`
* Lumiera »External Tree Description« (based on `GenNode`)
* a string-based spec for testing
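A hypothetical usage sketch — the construction and invocation calls are assumptions for illustration; only the placeholder syntax is as specified above:

```cpp
#include <map>
#include <string>

std::map<std::string,std::string> data
  {
    {"user",     "Ichthyo"},
    {"greeting", "moin"},
  };

// ctor and render() call are guesses at the interface, not the actual API
TextTemplate tmpl{"${if greeting}${greeting} ${end if}${user}!"};
std::string  result = tmpl.render (data);        // -> "moin Ichthyo!"
```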
This extension is required to use GenNode as data source for text-template instantiation.
I am aware that such a function could counter the design intent for GenNode,
because it could be (ab)used to "just get the damn value" and then
parse back the results...
...turns out challenging, since our intention here
is borderline to the intended design of the Lumiera ETD.
It ''should work'' though, when combined with a Variant-visitor...
Document existing data binding logic and investigate in detail
what must be done to enable a similar binding backed by Lumiera's ETD structures.
This analysis highlights some tricky aspects, which can be accommodated by
slight adjustments and generalisations in the `TextTemplate` implementation
* `GenNode` is not structured string data, rather binary data
* thus exposing a std::string_view is not adequate, requiring the
result type to be picked up from the actual data binding
* moreover, to allow for arbitrary nested scopes, a back-pointer
to the parent scope must be maintained, which requires stable memory locations.
This can best be solved within the InstanceCore itself, which manages
the actual hierarchy of data source references.
* the existing code happens already to fulfil this requirement, but
for sake of clarity, handling of such a nested scope is now extracted
into a dedicated operation, to highlight the guaranteed memory layout.
...hoped to keep it simple, but this is inevitable, since we
want to provide a CSV list as value within a list of key=value
bindings, all packaged into a simple string for easy testing.
Thus the parsing RegExp just needs two branches, for simple and quoted values.
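In essence (the actual RegExp in the code may differ):

```cpp
#include <regex>
#include <string>

// one branch for quoted values (which may contain commas, e.g. an embedded
// CSV list) and one branch for simple unquoted tokens
std::regex const KEY_VAL{R"~((\w+)\s*=\s*(?:"([^"]*)"|([^,]*)))~"};

// e.g. applied to:   name=node, vals="1,2,3"
// yields match[1]=key, match[2]=quoted value, match[3]=simple value
```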
We use a DataSrc<DAT> template to access the actual data to be substituted.
However, when applying the Text-Template, we need to pick the right
specialisation, based on the type of the actual data provided.
Here we face several challenges:
* Class-Template-Argument-Deduction starts from the *primary* template's constructors:
without a constructor there, the compiler will only try the copy constructor and will
never see the constructors of partial specialisations.
This can be fixed by providing a ''dummy constructor'' in the primary template.
* The specifics of how to provide a custom CTAD deduction guide
for a **nested template** are not well documented. I have found
several bug reports, and seemingly one of these bugs foiled
my various attempts. Moreover it is ''not clear if such a deduction
guide can even be given outside of the class definition scope.''
For the intended usage pattern this would be crucial, since users
are expected to provide further specialisations of the DataSrc-template
* Thus I resorted to the ''old school solution,'' which is to use
a ''free builder function'' as an extension point, so users can
provide further overloads for the `buildDataSrc()` function (see the sketch after this list).
* Unfortunately, SFINAE tricks are way more limited for function overloads.
Thus it seems impossible to have a generic case alongside more specialised cases,
unless all special cases are disjoint.
Thus the solution is far from perfect, ''yet for the current situation it seems
sufficient'' (and C++20 Concepts will greatly help to resolve this kind of problem).
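The extension-point scheme in a nutshell (simplified):

```cpp
#include <map>
#include <string>

template<class DAT>
struct DataSrc;                       // primary template, specialised per data source

template<>
struct DataSrc<std::map<std::string,std::string>>
  {
    std::map<std::string,std::string> const& data;
  };

/** extension point: users provide further overloads for their own data types */
inline DataSrc<std::map<std::string,std::string>>
buildDataSrc (std::map<std::string,std::string> const& map)
  {
    return {map};
  }

// a user-provided binding for some custom type adds another overload:
// DataSrc<MyData> buildDataSrc (MyData const&);
```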
...implemented by simply parsing the string into key=value pairs,
which are then stored into a shared map. The actual data binding
implementation can thus be inherited from the existing Map-binding.