- schedule can now be adapted to concurrency and expected distribution of runtimes
- additional stress factor to press the schedule (1.0 is nominal speed)
- observed run-time now without Scheduler start-up and pre-roll
- document and verify computed numbers
...based on the adapted time-factor sequence
implemented yesterday in TestChainLoad itself
- in this case, the TimeBase from the computation load is used as level speed
- this »base beat« is then modulated by the timing factor sequence
- factoring in an additional stress factor to press the schedule uniformly (see sketch below)
- actual start time will be added as offset once the actual test commences
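To make the bullets above concrete, here is a minimal sketch of how such a schedule could be derived; the names and the exact role of the stress factor are illustrative assumptions, not the actual TestChainLoad code:

```cpp
#include <vector>

// Sketch: derive concrete start times from the time-factor sequence.
// Assumption: stress < 1.0 presses (compresses) the schedule, 1.0 is nominal.
std::vector<double>
adaptedSchedule (std::vector<double> const& timeFactors  // one factor per level
                ,double baseBeat                         // »base beat«, e.g. TimeBase of the load
                ,double stress                           // uniform stress factor
                ,double startOffset)                     // actual test start time
{
    std::vector<double> startTimes;
    startTimes.reserve (timeFactors.size());
    double t = 0.0;
    for (double factor : timeFactors)
    {
        startTimes.push_back (startOffset + t);
        t += stress * factor * baseBeat;   // modulate the base beat per level
    }
    return startTimes;
}
```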
...up to now, we've relied on a regular schedule governed solely
by the progression of node levels, with a fixed level speed
defaulting to 1ms per level.
But in preparation for stress testing, we need a schedule adapted
to the expected distribution of computation times, otherwise
we'll not be able to factor out the actual computation graph
connectivity. The goal is to establish a distinctive
**breaking point** when the scheduler is unable to cope with
the provided schedule.
The helper developed thus far produces a sequence of
weight factors per level, which could then be multiplied
with an actual delay base time to produce a concrete schedule.
These calculations, while simple, are difficult to understand;
it is recommended to use the values tabulated in this test together
with a `graphviz` rendering of the node graph (🠲 `printTopologyDOT()`)
The intention is to establish a theoretical limit for the expense,
given some degree of concurrency. In reality, the expense should always
be greater, since the time is not just split by the number of cores;
rather we need to chain up existing jobs of various weight on the available
cores (which is a special case of the bin packing problem).
With this formula, an ideal weight factor can be determined for each level,
and then summing up the sequence of levels gives us a guess for a sensible
timing for the overall scheduler.
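A sketch of what such a per-level computation might look like (hypothetical names; the real helper lives in TestChainLoad): under ideal conditions a level's summed weight is divided by the achievable speed-up, which is capped both by the node count and by the concurrency:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct LevelWeight { size_t nodes; double weight; };  // node count and summed node weight per level

// Ideal case: a level's weight is split over min(nodes, concurrency) cores.
double
levelExpense (LevelWeight const& lv, unsigned concurrency)
{
    if (lv.nodes == 0) return 0.0;
    double speedUp = std::min<double> (lv.nodes, concurrency);
    return lv.weight / speedUp;
}

// Summing over the sequence of levels yields a guess for the overall timing.
double
expectedExpense (std::vector<LevelWeight> const& levels, unsigned concurrency)
{
    return std::accumulate (levels.begin(), levels.end(), 0.0,
                            [&](double sum, LevelWeight const& lv)
                               { return sum + levelExpense (lv, concurrency); });
}
```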
...so IterExplorer got yet another processing layer,
which uses the grouping mechanics developed yesterday,
but is freely configurable through λ-Functions.
At the actual usage site in TestChainLoad, now only the actual
aggregation computation must be supplied, and follow-up computations
can be chained up easily as further transformation layers.
Yesterday I've written a simple loop-based implementation of
a grouping aggregation to count the node weights per level.
Unfortunately it turns out we'll use several flavours of this
and we'd have to chain up postprocessing -- thus from a usage perspective
it would be better to have the same functionality packaged as an iterator pipeline.
This turns out to be surprisingly tricky and there is no suitable library
function available, which means I'll have to write one myself.
This changeset is the first step in this direction: reformulate
the simple for-loop into a demand-driven grouping iterator
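For illustration, a minimal sketch of such a demand-driven grouping stage, reduced to summing node weights per level (illustrative shape only; the real implementation is an IterExplorer processing layer):

```cpp
#include <iostream>
#include <optional>
#include <utility>
#include <vector>

struct Node { unsigned level; double weight; };

// Demand-driven grouping: each pull() consumes exactly one group of
// consecutive nodes on the same level and yields (level, Σweight).
template<typename IT>
class GroupedSum
{
    IT pos_, end_;
public:
    GroupedSum (IT begin, IT end) : pos_{begin}, end_{end} { }

    std::optional<std::pair<unsigned,double>>
    pull()
    {
        if (pos_ == end_) return std::nullopt;
        unsigned level = pos_->level;
        double sum = 0.0;
        for ( ; pos_ != end_ && pos_->level == level; ++pos_)
            sum += pos_->weight;
        return std::make_pair (level, sum);
    }
};

int main()
{
    std::vector<Node> nodes{{0,1.0},{1,2.5},{1,0.5},{2,1.0}};
    GroupedSum grouped{nodes.begin(), nodes.end()};
    while (auto group = grouped.pull())
        std::cout << "level " << group->first << " Σweight=" << group->second << "\n";
}
```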
...the idea is to use the sum of node weights per level
to create a schedule, which more closely reflects the distribution
of actual computation time. Hopefully such a schedule can then be
squeezed or stretched by a time factor to find out a ''breaking point'',
at which the Scheduler is no longer able to keep up.
In-depth investigation and reasoning highlighted another problem,
which could lead to memory corruption in rare cases; in the end
I found a solution by caching the ''address'' of the current Epoch
and re-validating this address on each Epoch-overflow.
After some difficulties getting any reliable measurement for a Release-build,
it turned out that this solution even ''improves performance by 22%''
Remark-1: the static blockFlow::Config prevents simple measurements by
just recompiling one translation unit; it is necessary to build the
relevant parts of Vault-layer with optimisation to get reliable numbers
Remark-2: performing a full non-DEBUG build highlighted two missing
header-inclusions to allow for the necessary template specialisations.
...discovered during investigation of the latest Scheduler failures.
The root of the problems is that block overflow can potentially trigger
expansion of the allocation pool. Under some circumstances, this on-the-fly
allocation requires a rotation of index slots, thereby invalidating
existing iterators.
While such behaviour is not uncommon with storage data structures (see std::vector),
in this case it turns out to be problematic: due to performance considerations,
a usage pattern emerged which exploits re-using existing storage »Slots« with known
deadline. This optimisation seems to have significant leverage on the
planning jobs, which happen to allocate and arrange a whole strike of
Activities with similar deadlines.
One of these problem situations can easily be fixed, since it is triggered
through the iterator itself, using a delegate function to request a storage expansion,
at which point the iterator is able to re-link and fix its internal index.
This solution also has no tangible performance implications in optimised code.
Unfortunately there remains one obscure corner case where such a pool expansion
could also have invalidated other iterators, which are then used later to
attach dependency relations; even a partial fix for that problem seems
to cause a considerable performance cost of about -14% in optimised code.
- now there can no longer be any direct dispatch when entering events
- thus there is no decision logic at the entrance anymore
- rather, the work-function implementation moved down into Layer-2
- so add unit-test-like coverage there (integration in SchedulerService_test)
This amounts to a rather massive refactoring, prompted by the enduring problems
observed when pressing the scheduler. All the various glitches and (fixed) crashes
are related to the way planning-jobs enter items into the schedule,
which is also closely tied to the difficulties getting the locking
for planning-jobs correct.
The solution pursued hereby is to reorder the main avenues into the
scheduler implementation. There is now a streamlined main entrance,
which **always** enqueues only, allowing most checks and
coordination to be omitted. On the other hand, the complete coordination and dispatch
of the work capacity is now shifted down into the SchedulerCommutator,
thereby linking all coordination and access control close together
into a single implementation facility.
If this works out as intended
- several repeated checks on the Grooming-Token could be omitted (performance)
- the planning-job would no longer be able to lose / drop the Token,
thereby being enforced to run single-threaded (as was the original intention)
- since all planning effectively originates from planning-jobs, this
would allow omitting many safety barriers and complexities at the
scheduler entrance avenue, since now all entries just go into the queue.
WIP: tests compile, but must be adapted / reworked
- fix mistake in schedule time for planning chunks (must use start, not end of chunk)
- allow to configure the heuristics for pre-roll (time reserved for planning a node)
...observing multiple failures, which seem to be interconnected
- the test-setup with the planning chunk pre-roll is insufficient
- basically it is not possible to perform further concurrent planning,
without getting behind the actual schedule; at least in the setup
with DUMP print statements (which slow down everything)
- multiple chained re-entrant calls into the planning function can result
- the **ASSERTION in the Allocator** was triggered again
- the log+stacktrace indicate that there **is still a Gap**
in the logic to protect the allocations via Grooming-Token
...causing the system to freeze due to excess memory allocation.
Fortunately it turned out this was not an error in the Scheduler core
or memory manager, but rather a sloppiness in the test scaffolding.
However, this incident highlights that the memory manager lacks some
sanity checks to prevent outright nonsensical allocation requests.
Moreover it became clear again that the allocation happens ''already before''
entering the Scheduler — and thus the existing sanity check comes too late.
Now I've used the same reasoning also for additional checks in the allocator,
limiting the Epoch increment to 3000 and the total memory allocation to 8GiB
Talking of Gibibytes...
indeed we could use a shorthand notation for that purpose...
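For instance, a user-defined literal could serve as such a shorthand; this is a hypothetical sketch, not anything present in the code base yet:

```cpp
#include <cstdint>

// Conceivable shorthand notation for Gibibyte quantities (hypothetical)
constexpr uint64_t
operator""_GiB (unsigned long long n)
{
    return n * (uint64_t{1} << 30);
}

static_assert (8_GiB == 8ull * 1024 * 1024 * 1024, "eight Gibibytes");
```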
While my basic assessment is still that contention will not play a significant
role in the expected real-world usage scenario — when testing with a
tighter schedule and rather short jobs (500µs), some phases of massive contention
can be observed, leading to significant slow-down of the test.
The major problem seems to be that extended phases of contention will
effectively cause several workers to remain in an active spinning-loop for
multiple microseconds, while also permanently reading the atomic lock.
Thus an adaptive scheme is introduced: after some repeated contention events,
workers now throttle down by themselves, with polling delays increased
with exponential stepping up to 2ms. This turns out to be surprisingly
effective and completely removes any observed delays in the test setup.
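In essence, the scheme amounts to something like the following sketch; the initial delay value and the simplification of throttling on every failed acquisition (rather than after several repeated contention events) are my assumptions:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

using namespace std::chrono_literals;

// Sketch: instead of spinning on the atomic lock, a worker throttles itself,
// stepping the polling delay up exponentially, capped at 2ms.
void
contendedAcquire (std::atomic_flag& lock)
{
    std::chrono::microseconds delay = 50us;        // initial polling delay (assumed value)
    while (lock.test_and_set (std::memory_order_acquire))
    {
        std::this_thread::sleep_for (delay);       // back off instead of active spinning
        delay = std::min<std::chrono::microseconds> (2*delay, 2ms);  // exponential stepping, 2ms cap
    }
}
```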
Turns out that we need to implement fine-grained and explicit handling logic
to ensure that Activity planning only ever happens protected by the Grooming-Token.
This is in accordance with the original design, which dictates that all management tasks
must be done in »management mode«, which can only be entered by a single thread at a time.
The underlying assumption is that the effort for management work is dwarfed in comparison
to any media calculation work.
However, in
5c6354882d
...I discovered an insidious border condition, and in an attempt to fix it,
I broke that fundamental assumption. The problem arises from the fact that we
do want to expose a *public API* of the Scheduler. Even though this is only used
to ''seed'' a calculation stream, because any further planning- and management work
will be performed by the workers themselves (this is a design decision, we do not
employ a "scheduler thread")
Anyway, since the Scheduler API ''is'' public, ''someone from the outside'' could
invoke those functions, and — unaware of any Scheduler internals — will
automatically acquire the Grooming-Token, yet never release it,
leading to deadlock.
So we need a dedicated solution, which is hereby implemented as a
scoped guard: in the standard case, the caller is a management-job and
thus already holds the token (and nothing must be done). But in the
rare case of an »outsider«, this guard now ''transparently'' acquires
the token (possibly with a blocking wait) and ''drops it when leaving scope''
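The shape of this guard, sketched with simplified stand-in token handling (the actual SchedulerCommutator logic differs in detail):

```cpp
#include <atomic>
#include <thread>

// Simplified stand-in for the Grooming-Token bookkeeping (illustrative only)
std::atomic<std::thread::id> groomingTokenOwner{};

bool
holdsGroomingToken()
{
    return groomingTokenOwner.load (std::memory_order_relaxed) == std::this_thread::get_id();
}

void
acquireGroomingToken()    // possibly blocking wait
{
    std::thread::id expected{};
    while (not groomingTokenOwner.compare_exchange_weak (expected, std::this_thread::get_id()
                                                        ,std::memory_order_acquire))
    {
        expected = std::thread::id{};
        std::this_thread::yield();
    }
}

void
dropGroomingToken()
{
    groomingTokenOwner.store (std::thread::id{}, std::memory_order_release);
}

// Scoped guard: a no-op when the caller (a management-job) already holds the
// token; otherwise it is acquired transparently and dropped when leaving scope.
class GroomingGuard
{
    bool acquired_{false};
public:
    GroomingGuard()
    {
        if (not holdsGroomingToken())
        {
            acquireGroomingToken();
            acquired_ = true;
        }
    }
   ~GroomingGuard()
    {
        if (acquired_)
            dropGroomingToken();
    }
};
```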
...this feature seems to be no longer necessary now;
leaving the actual implementation in-code for the time being,
but removing it from the public access API.
The last round of refactorings yielded significant improvements
- parallelisation now works as expected
- processing progresses closer to the schedule
- run time was reduced
The processing load for this test is tuned in a way to overload the
scheduler massively at the end -- the result must be correct nonetheless.
There was one notable glitch with an assertion failure from the memory manager.
Hopefully I can reproduce this by pressing and overloading the Scheduler more...
The rework from yesterday turned out to be effective ... unfortunately
a bit too much: since now late follow-up notifications take precedence,
a single worker tends to process the complete chain depth-first, because
the first chain will be followed and processed, even before the worker
was able to post the tasks for the other branches. Thus this single
worker is the only one to get a chance to proceed.
After some consideration, I am now leaning towards a fundamental change,
instead of just fixing some unfavourable behaviour pattern: while the
language semantics remains the same, the scheduler should no longer
directly dispatch into the next chain **from λ-post**. That is, whenever
a POST / NOTIFY is issued from the Activity-chain, the scheduler goes
through prioritisation.
This has further ramifications: we do not need a self-inhibition mechanism
any more (since now NOTIFY picks up the schedule time of the target).
With these changes, processing seems to proceed more smoothly,
albeit still with lots of contention on the Grooming token,
at least in the example structure tested here.
While the recent refactoring...
206c67cc
...was a step in the right direction, it pushed too hard,
overlooking the requirement to protect the scheduler contents
and thus all of the Activity-chains against concurrent modification.
Moreover, the recent solution still seems not quite orthogonal.
Thus the handling of notifications was thoroughly reworked:
- the explicit "double-dispatch" was removed, since actual usage
of the language indicates that we only need notifications to
Gate (and Hook), but not to any other conceivable Activity.
- thus it seems unnecessary to turn "notification" into some kind
of secondary work mode. Rather, it is folded as special case
into the regular dispatch.
This leads to new processing rules:
- a POST goes into λ-post (obviously... that's its meaning)
- a NOTIFY now passes its *target* into λ-post
- λ-post invokes ''dispatch''
- and **dispatching a Gate now implies to notify the Gate**
This greatly simplifies the »state machine« in the Activity-Language,
but also incurs some limitations (which seems adequate, since it is
now clear that we do not ''schedule'' or ''dispatch'' arbitrary
Activities — rather we'll do this only with POST and NOTIFY,
and all further processing happens by passing activation
along the chain, without involving the Scheduler)
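The new rules can be summarised by the following illustrative pseudocode, using stand-in types rather than the actual Activity-Language implementation:

```cpp
// Illustrative sketch only: »post« is modelled as an immediate hand-over,
// whereas really it goes through the scheduler's prioritisation.
struct Activity
{
    enum Verb { POST, NOTIFY, GATE, OTHER } verb;
    Activity* next{};     // follow-up in the chain
    Activity* target{};   // notification target (a Gate, typically)
    int latch{0};         // count-down latch, relevant for a GATE
};

void dispatch (Activity&);

void post (Activity& act) { dispatch (act); }   // λ-post invokes ''dispatch''

void activate (Activity& act)
{
    switch (act.verb)
    {
      case Activity::POST:    post (*act.next);    return;  // a POST goes into λ-post
      case Activity::NOTIFY:  post (*act.target);  return;  // a NOTIFY passes its *target* into λ-post
      default:                dispatch (act);      return;  // all else passes activation along directly
    }
}

void dispatch (Activity& act)
{
    if (act.verb == Activity::GATE)     // dispatching a Gate implies notifying the Gate
        if (--act.latch > 0)
            return;                     // Gate still blocked → SKIP
    if (act.next)
        activate (*act.next);           // pass activation further along the chain
}
```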
use a feature of the Activity-Language prepared for this purpose:
self-Inhibition of the Chain. This prevents a prerequisite-NOTIFY
from triggering a complete chain of available tasks before these tasks
have actually reached their nominal scheduling time.
This has the effect of aligning the computations much more strictly
with the defined schedule.
The main (test) thread is kept in a blocking wait until the
planned schedule is completed. If however the schedule overruns,
the wake-up job could just be triggered prematurely.
This can easily be prevented by adding a dependency from the last
computation job to the wake-up job. If the computation somehow
flounders, the SAFETY_TIMEOUT (5s) will eventually raise
an exception to let the test fail cleanly (shutting down
the Scheduler automatically)
...it seems impossible to solve this conundrum other than by
opening a path to override a contextual deadline setting from
within the core Activity-Language logic.
This will be used in two cases
- when processing an explicitly coded POST (using deadline from the POST)
- after successfully opening a Gate by NOTIFY (using deadline from Gate)
All other cases can now supply Time::NEVER, thereby indicating that
the processing layer shall use contextual information (intersection
of the time intervals)
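Sketch of the resulting resolution rule (hypothetical names and types; `Time::NEVER` modelled as a sentinel):

```cpp
#include <algorithm>
#include <chrono>

using Time = std::chrono::steady_clock::time_point;
constexpr Time NEVER = Time::max();   // stand-in for Time::NEVER

// An explicit override (from a POST, or from an opened Gate) wins;
// NEVER signals »use contextual information«, i.e. the intersection
// of the enclosing time intervals.
Time
effectiveDeadline (Time overrideDeadline, Time contextEnd1, Time contextEnd2)
{
    if (overrideDeadline != NEVER)
        return overrideDeadline;
    return std::min (contextEnd1, contextEnd2);   // intersection of intervals
}
```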
...this is an interesting test failure, which highlights inconsistencies
with handling of deadlines when processing follow-up from NOTIFY-triggers
There was also some fuzziness related to the ''meaning'' of λ-post,
leading to at least one superfluous POST invocation for each propagation;
fixing this does not solve the problem, yet it removes unnecessary overhead
and lock contention.
...playing around with the graph for the Scheduler integration test
...single-threaded run time seemed to behave irregularly
...but in fact it is very close to what can be expected
based on an ''averaged node weight''
Fortunately it's very simple to add that to the existing node statistics.
Basically this is all done and settled already: this is the `usageExample()`
from `TestChainLoadTest`. However, the focus is slightly different here:
We want a demonstration that the Scheduler can work flawlessly through
a massive load. Thus the plan is to use much more challenging parameters,
and then lean back and watch what happens....
...which turns out to be due to the DUMP-Statements,
which seem to create quite some contention on their own.
Test cases with a very tight schedule will then slip;
without print statements everything is GREEN now.
This partially reverts commit 72f11549e6.
"Chain-Load: Scheduler instrumentation for observation"
Hint: revert this changeset to re-introduce the print statements for diagnostics
* added benchmark over synchronous execution as point of reference
* verified running times and execution pattern
* Scheduler **behaves as expected** for this example
- Generally speaking, the calibration uses current baseline settings;
- There are now two different load generation methods, thus both must be calibrated
- Performance contains some socket (fixed base) and non-linear effects, thus calibration
should be done close to the work point, which can be achieved by incremental
calibration until the error is < 5% (see sketch below)
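The incremental calibration mentioned above could look roughly like this sketch (names and the proportional correction step are illustrative assumptions):

```cpp
#include <cmath>
#include <functional>

// Sketch: adjust the load's scale factor near the work point,
// until the measured run time deviates less than 5% from the target.
double
calibrate (std::function<double(double)> measureRuntime, double targetTime, double guess)
{
    for (int round=0; round < 10; ++round)        // bounded number of rounds
    {
        double actual = measureRuntime (guess);
        double error  = (actual - targetTime) / targetTime;
        if (std::fabs (error) < 0.05)
            break;                                // error < 5% → calibrated
        guess *= targetTime / actual;             // proportional correction step
    }
    return guess;
}
```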
Interestingly, longer time-base values run slightly faster than predicted,
which is consistent with the expectation (socket cost). And using a larger
memory block increases time values, which is also plausible, since
cache effects will diminish.
...initial gauging is a tricky subject,
since the performance of existing computers spans a wide range.
Allowing
- pre-calibration -98% .. +190%
- single run ±20%
- benchmark ±5%
...which can be deliberately attached (or not attached) to the
individual node invocation functor, allowing the effect
of actual load vs. zero-load and worker contention to be studied.
Within Chain-Load, the infrastructure to add this crucial feature
is minimal: each node gets a `weight` parameter, which is assigned
using another RandomDraw-Rule (by default `weight==0`).
The actual computation load will be developed as a separate component
and tied in from the node calculation job functor.
...during development of the Chain-Load, it became clear that we'll often
need a collection of small trees rather than one huge graph. Thus a rule
for pruning nodes and finishing graphs was added. This has the consequence
that there might now be several exit nodes scattered all over the graph;
we still want one single global hash value to verify computations,
thus those exit hashes must now be picked up from the nodes and
combined into a single value.
All existing hash values hard-coded into tests must be updated.
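The combination step might look like this sketch, using a hash_combine-style mixing function (illustrative; the actual code may combine the exit hashes differently):

```cpp
#include <cstddef>
#include <vector>

// Fold the scattered exit hashes into one global verification value.
size_t
combine (size_t seed, size_t hash)
{
    return seed ^ (hash + 0x9e3779b97f4a7c15ull + (seed<<6) + (seed>>2));
}

size_t
globalExitHash (std::vector<size_t> const& exitHashes)
{
    size_t result = 0;
    for (size_t h : exitHashes)
        result = combine (result, h);
    return result;
}
```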
...with this change, processing is ''ahead of schedule'' from the beginning,
which has the nice side effect that the problematic contention situation
with these very short computation jobs cannot arise, and most of the schedule
is processed by a single worker.
Processing pattern is now pretty much as expected
This is a trick to get much better scheduling and timing guesses.
Instead of targeting a specific level, a fixed number of nodes
is processed in each chunk, while still always completing whole levels.
The final level number to expect can be retrieved from the chain-load graph.
With this refactoring, we can now schedule a wake-up job precisely
after the expected completion of the last level
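A sketch of the chunk boundary computation just described (illustrative names):

```cpp
#include <cstddef>
#include <vector>

// Planning chunks cover a fixed number of nodes, yet always complete levels.
// Given per-level node counts, determine where the next chunk ends.
size_t
nextChunkBoundary (std::vector<size_t> const& nodesPerLevel, size_t startLevel, size_t chunkSize)
{
    size_t nodes = 0;
    size_t level = startLevel;
    while (level < nodesPerLevel.size() && nodes < chunkSize)
        nodes += nodesPerLevel[level++];
    return level;    // first level *not* covered by this chunk
}
```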
Scheduling a wake-up job behind the end of the planned schedule did the trick.
Sometimes there is ''strong contention'' immediately after full provision of the WorkForce,
but this seems to be as expected, since the »Jobs« currently used have no
actually relevant run time of their own. It is all the more surprising that
the Capacity-control logic is able to cope with this situation within
just a few milliseconds, bringing the average Lag to ~300µs.
Invent a special JobFunctor...
- can be created / bound from a λ
- self-manages its storage on the heap
- can be invoked once, then discards itself
Intention is to pass such one-time actions to the Scheduler
to cause some ad-hoc transitions tied to current circumstances;
a notable example will be the callback after load-test completion.
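A conceivable shape for such a one-shot functor (hypothetical sketch; the real JobFunctor interface of the Scheduler is more involved):

```cpp
#include <functional>
#include <utility>

class OneShotAction
{
    std::function<void()> action_;

    explicit
    OneShotAction (std::function<void()> a)
      : action_{std::move (a)}
    { }

public:
    // created / bound from a λ; storage is self-managed on the heap
    static OneShotAction*
    create (std::function<void()> a)
    {
        return new OneShotAction{std::move (a)};
    }

    // can be invoked once, then discards itself
    void
    invoke()
    {
        action_();
        delete this;    // one-time use: self-destruct after invocation
    }
};
```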
In the first draft version, a blocked Gate was handled by
»polling« the Gate regularly by scheduling a re-invocation
repeatedly into the future (by a stepping defined through
ExecutionCtx::getWaitDelay()).
Yet the further development of the Activity-Language indicates
that the ''Notification mechanism'' is sufficient to handle all
foreseeable aspects of dependency management. Consequently this
''Gate polling is no longer necessary,'' since on Notification
the Gate is automatically checked and the activation impulse
is immediately passed on; thus the re-scheduled check would
never get an opportunity to actually trigger the Gate; such
active polling would only be necessary if the count-down
latch in the Gate is changed by "external forces".
Moreover, the first Scheduler integration tests with TestChainLoad
indicate that the rescheduled polling can create a considerable
additional load when longer dependency chains miss one early
prerequisite, and this additional load (albeit processed
comparatively fast by the Scheduler) will be shifted along
needlessly for quite some time, until all of the activities
from the failed chain have passed their deadline. And what
is even more concerning, these useless checks have a tendency
to mis-focus the capacity management, as it seems there is
much work to do in a near horizon, which in fact may not be
the case altogether.
Thus the Gate implementation is now *changed to just SKIP*
when blocked. This helped to drastically improve the behaviour
of the Scheduler immediately after start-up -- further observation
indicated another adjustment: the first Tick-duty-cycle is now
shortened, because (after the additional "noise" from gate-rescheduling
was removed), the newly scaled-up work capacity has the tendency
to focus in the time horizon directly behind the first jobs added
to the timeline, which typically is now the first »Tick«.
🠲 this leads to a recommendation: arrange the first job-planning
chunk in such a way that the first actual work jobs appear in the area
between 5ms and 10ms after triggering the Scheduler start-up.
The first complete integration test with Chain-Load
highlighted some difficulties with the overall load regulation:
- it works well in the standard case (but is possibly too eager to scale up)
- the scale-up sometimes needs several cycles to get "off the ground"
- when the first job is dispatched immediately instead of going
through the queue, the scheduler fails to boot up
two rather obvious bugfixes
(well, after watching the Scheduler in action...)
- the first planning-chunk needs an offset
- the future to block on must be set up before any dispatch happens
- prime the diagnostics with the time of the first invocation
- print timings relative to this first invocation
- DUMP output to watch the crucial scheduling operations
... so this (finally) is the missing cornerstone
... traverse the calculation graph and generate render jobs
... provide a chunk-wise pre-planning of the next batch
... use a future to block the (test) thread until completed
- test setup without actual scheduler
- wire the callbacks so as to verify
+ all nodes are touched
+ levels are processed to completion
+ the planning chunk stops at the expected level
+ all node dependencies are properly reported through the callbacks