Relying on the new instrumentation facility, the actually effective
concurrency and cumulative run time of the test jobs can be established.
These can now be cast into a form factor representing the actual excess
expense in relation to the theoretical model.
By adjusting the adapted schedule with this form factor,
it can be made to reflect the actual empiric load more closely,
hopefully leading to a more realistic effect of the stress factor
and thus to results better suited for conclusions about generic behaviour.
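Just to pin down the idea (the names and the exact relation are my illustration here, not the actual code):
```cpp
// Hypothetical sketch: derive a form factor from the instrumentation data.
// All names and the exact relation are assumptions for illustration.
double formFactor (double cumulativeJobTime     // Σ of measured job run times
                  ,double effectiveConcurrency  // avg. number of cores actually active
                  ,double modelRunTime)         // run time predicted by the theoretical model
{
    double observedRunTime = cumulativeJobTime / effectiveConcurrency;
    return observedRunTime / modelRunTime;      // > 1.0 indicates actual excess expense
}
```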
Various experiments to watch Scheduler behaviour under extended load.
Notably the example committed here makes the Scheduler run for 1.2 sec
and process 800 jobs of 10ms each (800 × 10ms ≈ 8s of aggregate work,
squeezed into 1.2s of wall-clock time), thereby putting the system
under 100% load on all CPUs
- supplement the pre-dimensioning for data capture; without that,
  sporadic memory corruption indeed happens (as expected, since
  concurrent re-allocation of the vector with an entry for each
  thread is not thread-safe, and this test shows much contention)
- add top-level logging for better diagnostics of errors
  emanating from the test run
After an extended break due to "real life issues"....
Pick up the investigation, with the goal to ascertain a valid definition
and understanding of all test parameters. A first step is to establish
a baseline ''without using a computational load''; this might be some kind
of base overhead of the scheduler.
However -- the way the test scaffolding was built, it is difficult to
create a feedback loop for the statistical test setup with binary search,
since it is not really clear how the single control parameter of the test algorithm,
the so-called "stress factor", should be interpreted and how it can be
combined with a base load.
An extended series of tests, while watching the observed value patterns qualitatively,
seems to corroborate the former results, indicating that the base expense
in my test setup (using a debug build) is at ~200µs / Node / core.
Yet the difficulty of interpreting this result and arriving at a logical and generic model
prevents me from translating it into a measurement scheme which could
be executed independently of a specific test setup and hardware.
The goal is to devise a load more akin to the expected real-world processing patterns,
and then to increase the density to establish a breaking point.
Preliminary investigations focus on establishing the properties of this load
and to ensure the actual computation load behaves as expected.
Using the third Graph pattern prepared thus far, which produces
short chains of length=2, yet immediately spreads out to maximum concurrency.
This leads to 5.8 Nodes / Level on average.
...as it turned out, this segfault was caused by flaws in the ScheduleCtx
used to generate the test schedule; especially when all node spreads are set
to zero and thus all jobs are scheduled immediately at t=0, there was a loophole
in the logic to set the dependencies for the final »wake-up« job.
When running such a schedule in the Stress-Test-Bench, the next measurement run
could be started due to a premature wake-up job, thereby overrunning the previous
test run, which could still be in the middle of computations.
So this was not a bug in the Scheduler itself, yet something to take care of
later when programming the actual Job-Planning and schedule generation.
This is just another (obvious) degree of freedom, which could be
interesting to explore in stress testing, while probably not of much
relevance in practice (if a job is expected to become runnable earlier,
it can just as well be scheduled earlier).
Some experimentation shows that the timing measurements exhibit more
fluctuations, but also slightly better times when pressure is low, which
is pretty much what I'd expect. When raising pressure, the average
times converge towards the same time range as observed with time bound
propagation.
Note that enabling this variation requires wiring a boolean switch
through various layers of abstraction; arguably this is an unnecessary
complexity and could be retracted once the »experimentation phase«
is over.
This completes the preparation of a Scheduler Stress-Test setup.
The `volatile` was used asymmetrically and there was concern that
this usage makes the `ComputationalLoad` dependent on concurrency.
However, an impact could not be confirmed empirically.
Moreover, to simplify this kind of test, make the `schedDepends`
directly configurable in the Stress-Test-Rig.
...watching those dumps on the example Graph with excessive dependencies
made blatantly clear that we're dispatching a lot of unnecessary jobs,
since the actual continuation is /always/ triggered by the dependency-NOTIFY.
Before the rework of NOTIFY-Handling, this was rather obscured, but now,
since the NOTIFY trigger itself is also dispatched by the Scheduler,
it ''must be this job'' which actually continues the calculation, since
the main job ''can not pass the gate'' before the dependency notification
arrives.
Thus I've now added a variation to the test setup where all these duplicate
jobs are simply omitted. And, as expected, the computation runs faster
and with fewer signs of contention. Together with the other additional
parameter (the base expense) we might now actually be able to narrow down
the observation of an ''expense socket'', which can then be
attributed to something like an ''inherent scheduler overhead''
...actually difficult to integrate into the existing scheme,
which is entirely level-based. Can only be added to the individual Jobs,
not to the planning and completion-jobs — which actually shouldn't be a problem,
since it is beneficial to dispatch the planning runs earlier
The next goal is to determine basic performance characteristics
of the Scheduler implementation written thus far;
to help with these investigations some added flexibility seems expedient
- the ability to define a per-job base expense
- added flexibility regarding the scheduling of dependencies
This changeset introduces configuration options
Elaborate the draft to include all the elements used directly in the test case thus far;
the goal is to introduce some structuring and leave room for flexible configuration,
while implementing the actual binary search as library function over Lambdas.
My expectation is to write a series of individual test instances with varying parameters;
while it seems possible to add further performance test variations into that scheme later on.
- the goal is to run a binary search
- the search condition should be factored out
- thus some kind of framework or DSL is required,
to separate the technicalities of the measurement
from the specifics of the actual test case.
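A sketch of what such a library function over λs might look like (all names invented for illustration):
```cpp
#include <functional>

// Hypothetical sketch of a binary search over a boolean condition, with the
// search condition factored out as a λ-function; names are illustrative only.
// Finds the smallest parameter in [lo,hi] for which `breaks` yields true.
inline double binarySearch_breakingPoint (std::function<bool(double)> breaks
                                         ,double lo, double hi
                                         ,double epsilon =0.01)
{
    while (hi - lo > epsilon)
    {
        double mid = (lo + hi) / 2;
        if (breaks(mid))
            hi = mid;       // breaking point is at or below mid
        else
            lo = mid;       // Scheduler still copes → search above mid
    }
    return hi;
}
```
A concrete test instance would then only supply the search condition, e.g. a λ running one measurement with the given stress factor and reporting whether the Scheduler slipped off schedule.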
- repeated invocations of the same test setup for statistics
- the usual nasty 64-node graph with massive fork out
- limit concurrency to 4 cores
- tabulate data to look for clues regarding a trigger criterion
Hypothesis: The Scheduler slips off schedule when all of the
following three criteria are met:
- more than 55% glitches with Δ > 2ms
- σ > 2ms
- ∅Δ > 4ms
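Sketched as code, such a predicate over the tabulated per-job delays Δ (in ms) might look like this (purely illustrative, not the actual evaluation code):
```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Hypothetical check of the three breakdown criteria over observed delays Δ [ms]
bool slipsOffSchedule (std::vector<double> const& delays)
{
    size_t n = delays.size();
    if (n == 0) return false;
    double glitches = std::count_if (delays.begin(), delays.end()
                                    ,[](double d){ return d > 2.0; });
    double mean = 0.0;
    for (double d : delays) mean += d;
    mean /= n;
    double var = 0.0;
    for (double d : delays) var += (d-mean)*(d-mean);
    double sdev = std::sqrt (var / n);

    return glitches/n > 0.55    // more than 55% glitches with Δ > 2ms
       and sdev      > 2.0      // σ > 2ms
       and mean      > 4.0;     // ∅Δ > 4ms
}
```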
...this one was quite silly: obviously we need a separate instance
of the memory block ''per invocation'', otherwise concurrent invocations
would corrupt each other's allocation. The whole point of this variant
of the computation-load is to access a ''private'' memory block...
- schedule can now be adapted to concurrency and expected distribution of runtimes
- additional stress factor to press the schedule (1.0 is nominal speed)
- observed run-time now without Scheduler start-up and pre-roll
- document and verify computed numbers
...based on the adapted time-factor sequence
implemented yesterday in TestChainLoad itself
- in this case, the TimeBase from the computation load is used as level speed
- this »base beat« is then modulated by the timing factor sequence
- working in an additional stress factor to press the schedule uniformly
- actual start time will be added as offset once the actual test commences
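Roughly the following scheme, sketched here with invented names (the real code lives in TestChainLoad):
```cpp
#include <vector>

// Hypothetical sketch: turn the per-level timing-factor sequence into
// concrete schedule offsets; the »base beat« is the TimeBase of the load.
std::vector<double>
buildSchedule (std::vector<double> const& timingFactors // one factor per level
              ,double timeBase_ms                       // »base beat«
              ,double stressFac)                        // 1.0 ≙ nominal speed
{
    std::vector<double> offsets;
    offsets.reserve (timingFactors.size());
    double t = 0.0;                     // actual start time added later as offset
    for (double factor : timingFactors)
    {
        offsets.push_back (t);
        t += timeBase_ms * factor / stressFac;  // larger stress factor compresses the schedule
    }
    return offsets;
}
```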
...up to now, we've relied on a regular schedule governed solely
by the progression of node levels, with a fixed level speed
defaulting to 1ms per level.
But in preparation of stress testing, we need a schedule adapted
to the expected distribution of computation times, otherwise
we'll not be able to factor out the actual computation graph
connectivity. The goal is to establish a distinctive
**breaking point** when the scheduler is unable to cope with
the provided schedule.
The helper developed thus far produces a sequence of
weight factors per level, which could then be multiplied
with an actual delay base time to produce a concrete schedule.
These calculations, while simple, are difficult to understand;
recommended to use the values tabulated in this test together
with a `graphviz` rendering of the node graph (🠲 `printTopologyDOT()`)
The intention is to establish a theoretical limit for the expense,
given some degree of concurrency. In reality, the expense should always
be greater, since the time is not just split by the number of cores;
rather we need to chain up existing jobs of various weight on the available
cores (which is a special case of the box packing problem).
With this formula, an ideal weight factor can be determined for each level,
and summing up the sequence of levels then gives us a guess at a sensible
timing for the overall schedule.
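Since the formula itself is not spelled out here, the following is only a plausible reading: the aggregate node weight of a level can at best be divided by the concurrency usable on that level, i.e. never by more cores than there are nodes.
```cpp
#include <algorithm>
#include <cstddef>

// Plausible reading of the per-level weight factor (not the verbatim formula)
double idealLevelFactor (double sumOfNodeWeights, size_t nodesInLevel, size_t cores)
{
    size_t usableCores = std::min (nodesInLevel, cores);
    return sumOfNodeWeights / usableCores;
}
```
Summing these per-level factors would then yield the theoretical lower bound; real runs must exceed it due to the box-packing effect described above.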
...so IterExplorer got yet another processing layer,
which uses the grouping mechanics developed yesterday,
but is freely configurable through λ-Functions.
At the actual usage site in TestChainLoad, now only the actual
aggregation computation must be supplied, and follow-up computations
can now be chained up easily as further transformation layers.
Yesterday I've written a simple loop-based implementation of
a grouping aggregation to count the node weights per level.
Unfortunately it turns out we'll use several flavours of this
and we'd have to chain up postprocessing -- thus from a usage perspective
it would be better to have the same functionality packaged as an iterator pipeline.
This turns out to be surprisingly tricky and there is no suitable library
function available, which means I'll have to write one myself.
This changeset is the first step into this direction: reformulate
the simple for-loop into a demand-driven grouping iterator
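To illustrate the essence (written eagerly and with invented names here; the real thing is a demand-driven layer in the IterExplorer pipeline):
```cpp
#include <vector>
#include <functional>

// Stand-alone sketch of a grouping aggregation: walk the source elements,
// group consecutive ones with equal key and fold each group into one result.
template<class ELM, class KEY, class ACC>
std::vector<ACC>
groupAggregate (std::vector<ELM> const& src
               ,std::function<KEY(ELM const&)> keyOf
               ,std::function<void(ACC&, ELM const&)> fold)
{
    std::vector<ACC> result;
    size_t i = 0;
    while (i < src.size())
    {
        KEY key = keyOf (src[i]);
        ACC acc{};
        while (i < src.size() and keyOf (src[i]) == key)
            fold (acc, src[i++]);    // aggregate all consecutive elements of this group
        result.push_back (acc);
    }
    return result;
}
```
For the use case at hand, the key would be the node's level and the fold would sum up the node weights of that level.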
...the idea is to use the sum of node weights per level
to create a schedule, which more closely reflects the distribution
of actual computation time. Hopefully such a schedule can then be
squeezed or stretched by a time factor to find out a ''breaking point'',
at which the Scheduler is no longer able to keep up.
In-depth investigation and reasoning highlighted another problem,
which could lead to memory corruption in rare cases; in the end
I found a solution by caching the ''address'' of the current Epoch
and re-validating this address on each Epoch-overflow.
After some difficulties getting any reliable measurement for a Release-build,
it turned out that this solution even ''improves performance by 22%''
Remark-1: the static blockFlow::Config prevents simple measurements by
just recompiling one translation unit; it is necessary to build the
relevant parts of Vault-layer with optimisation to get reliable numbers
Remark-2: performing a full non-DEBUG build highlighted two missing
header-inclusions to allow for the necessary template specialisations.
- fix mistake in schedule time for planning chunks (must use start, not end of chunk)
- allow configuring the heuristics for pre-roll (time reserved for planning a node)
...observing multiple failures, which seem to be interconnected
- the test-setup with the planning chunk pre-roll is insufficient
- basically it is not possible to perform further concurrent planning,
without getting behind the actual schedule; at least in the setup
  with DUMP print statements (which slow down everything)
- multiple chained re-entrant calls into the planning function can result
- the **ASSERTION in the Allocator** was triggered again
- the log+stacktrace indicate that there **is still a Gap**
in the logic to protect the allocations via Grooming-Token
...causing the system to freeze due to excess memory allocation.
Fortunately it turned out this was not an error in the Scheduler core
or memory manager, but rather a sloppiness in the test scaffolding.
However, this incident highlights that the memory manager lacks some
sanity checks to prevent outright nonsensical allocation requests.
Moreover it became clear again that the allocation happens ''already before''
entering the Scheduler — and thus the existing sanity check comes too late.
Now I've used the same reasoning also for additional checks in the allocator,
limiting the Epoch increment to 3000 and the total memory allocation to 8GiB
Talking of Gibibytes...
indeed we could use a shorthand notation for that purpose...
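One obvious candidate for such a shorthand would be a user-defined literal; just a sketch, not necessarily what ends up in the code:
```cpp
#include <cstddef>

// Sketch of a possible shorthand via user-defined literals (illustrative only)
constexpr std::size_t operator""_MiB (unsigned long long n) { return n * (1ull << 20); }
constexpr std::size_t operator""_GiB (unsigned long long n) { return n * (1ull << 30); }

// hypothetical usage: verify totalAllocation < 8_GiB before growing the pool
```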
...this feature seems to be no longer necessary now;
leaving the actual implementation in-code for the time being,
but removing it from the public access API.
The last round of refactorings yielded significant improvements
- parallelisation now works as expected
- processing progresses closer to the schedule
- run time was reduced
The processing load for this test is tuned in a way to overload the
scheduler massively at the end -- the result must be correct nonetheless.
There was one notable glitch with an assertion failure from the memory manager.
Hopefully I can reproduce this by pressing and overloading the Scheduler more...
use a feature of the Activity-Language prepared for this purpose:
self-Inhibition of the Chain. This prevents a prerequisite-NOTIFY
from triggering a complete chain of available tasks before these tasks
have actually reached their nominal scheduling time.
This has the effect of aligning the computations much more strictly
with the defined schedule
The main (test) thread is kept in a blocking wait until the
planned schedule is completed. If however the schedule overruns,
the wake-up job could just be triggered prematurely.
This can easily be prevented by adding a dependency from the last
computation job to the wake-up job. If the computation somehow
flounders, the SAFETY_TIMEOUT (5s) will eventually raise
an exception to let the test fail cleanly (shutting down
the Scheduler automatically)
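The pattern, sketched with plain standard-library primitives (the actual test uses the facilities of the test scaffolding, not these names):
```cpp
#include <future>
#include <chrono>
#include <stdexcept>

// Sketch of the blocking wait: the wake-up job fulfils a promise, and the
// main test thread waits on the corresponding future, bounded by a timeout.
void runTestAndWait (std::promise<void>& wakeUp)
{
    using namespace std::chrono_literals;
    std::future<void> done = wakeUp.get_future();   // set up *before* any dispatch
    // ... dispatch the schedule; the last computation job is a dependency of the
    //     wake-up job, which eventually invokes wakeUp.set_value() ...
    if (done.wait_for (5s) != std::future_status::ready)    // SAFETY_TIMEOUT
        throw std::runtime_error{"Scheduler overrun: test did not complete in time"};
}
```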
...this is an interesting test failure, which highlights inconsistencies
in the handling of deadlines when processing follow-up from NOTIFY-triggers.
There was also some fuzziness related to the ''meaning'' of λ-post,
leading to at least one superfluous POST invocation for each propagation;
fixing this does not solve the problem yet removes unnecessary overhead
and lock-contention
...playing around with the graph for the Scheduler integration test
...single threaded run time seemed to behave irregularly
...but in fact it is very close to what can be expected
based on an ''averaged node weight''
Fortunately it's very simple to add that into the existing node statistics
This partially reverts commit 72f11549e6.
"Chain-Load: Scheduler instrumentation for observation"
Hint: revert this changeset to re-introduce the print statements for diagnostic
* added benchmark over synchronous execution as point of reference
* verified running times and execution pattern
* Scheduler **behaves as expected** for this example
- Generally speaking, the calibration uses current baseline settings;
- There are now two different load generation methods, thus both must be calibrated
- Performance contains some socket cost and non-linear effects, thus calibration
should be done close to the work point, which can be achieved by incremental
calibration until the error is < 5%
Interestingly, longer time-base values run slightly faster than predicted,
which is consistent with the expectation (socket cost). And using a larger
memory block increases time values, which is also plausible, since
cache effects will be diminishing
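The incremental calibration mentioned above could look roughly like this (names and the adjustment rule are my assumptions):
```cpp
#include <cmath>
#include <functional>

// Sketch of incremental calibration close to the work point: re-measure and
// adjust the calibration factor until prediction and measurement agree to within 5%.
double calibrate (std::function<double(double)> measureRunTime  // run the load with given factor
                 ,double targetTime)                            // nominal time the load should take
{
    double calFactor = 1.0;
    for (int round=0; round < 10; ++round)       // limit the number of iterations
    {
        double measured = measureRunTime (calFactor);
        double error    = (measured - targetTime) / targetTime;
        if (std::fabs (error) < 0.05)            // error < 5% → calibration done
            break;
        calFactor *= targetTime / measured;      // scale factor towards the work point
    }
    return calFactor;
}
```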
...initial gauging is a tricky subject,
since the performance of existing computers spans a wide range.
Allowing the following tolerances:
- pre calibration -98% .. +190%
- single run ±20%
- benchmark ±5%
...which can be deliberately attached (or not attached) to the
individual node invocation functor, allowing us to study the effect
of actual load vs. zero-load and worker contention
Within Chain-Load, the infrastructure to add this crucial feature
is minimal: each node gets a `weight` parameter, which is assigned
using another RandomDraw-Rule (by default `weight==0`).
The actual computation load will be developed as a separate component
and tied in from the node calculation job functor.
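Only to illustrate the shape of such a rule (this is not the actual RandomDraw API):
```cpp
#include <functional>
#include <cstdint>

// Illustration only: the weight of each node is derived deterministically from
// the node's hash by a pluggable rule; the default rule yields weight == 0,
// i.e. a node without actual computational load.
using WeightRule = std::function<std::uint32_t(std::uint64_t nodeHash)>;

WeightRule defaultWeight = [](std::uint64_t)  { return 0u; };
WeightRule spreadWeight  = [](std::uint64_t h){ return std::uint32_t(h % 4); };  // weights 0..3
```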
...during development of the Chain-Load, it became clear that we'll often
need a collection of small trees rather than one huge graph. Thus a rule
for pruning nodes and finishing graphs was added. This has the consequence
that there might now be several exit nodes scattered all over the graph;
we still want one single global hash value to verify computations,
thus those exit hashes must now be picked up from the nodes and
combined into a single value.
All existing hash values hard coded into tests must be updated
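A sketch of such a combination step; whether the real code uses boost::hash_combine is an assumption on my part:
```cpp
#include <boost/functional/hash.hpp>
#include <vector>
#include <cstddef>

// Fold the hash values of all exit nodes into one global checksum
std::size_t combinedExitHash (std::vector<std::size_t> const& exitHashes)
{
    std::size_t combined = 0;
    for (std::size_t h : exitHashes)
        boost::hash_combine (combined, h);   // order-dependent combination
    return combined;
}
```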
...with this change, processing is ''ahead of schedule'' from the beginning,
which has the nice side effect that the problematic contention situation
with these very short computation jobs can not arise, and most of the schedule
is processed by a single worker.
Processing pattern is now pretty much as expected
This is a trick to get much better scheduling and timing guesses.
Instead of targeting a specific level, a fixed number of nodes
is processed in each chunk, while still always processing complete levels.
The final level number to expect can be retrieved from the chain-load graph.
With this refactoring, we can now schedule a wake-up job precisely
after the expected completion of the last level
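The chunking idea, sketched with invented names:
```cpp
#include <vector>

// Sketch: collect complete levels into a planning chunk until a fixed number
// of nodes is reached, then cut the chunk at the next level boundary.
std::vector<size_t>                                    // index of the first level per chunk
planChunks (std::vector<size_t> const& nodesPerLevel, size_t nodesPerChunk)
{
    std::vector<size_t> chunkStarts{0};
    size_t cnt = 0;
    for (size_t level=0; level < nodesPerLevel.size(); ++level)
    {
        cnt += nodesPerLevel[level];
        if (cnt >= nodesPerChunk and level+1 < nodesPerLevel.size())
        {
            chunkStarts.push_back (level+1);           // next chunk starts at the following level
            cnt = 0;
        }
    }
    return chunkStarts;
}
```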
Scheduling a wake-up job behind the end of the planned schedule did the trick.
Sometimes there is ''strong contention'' immediately after full provision of the WorkForce,
but this seems to be as expected, since the »Jobs« currently used have no
actually relevant run time on their own. It is even more surprising that
the Capacity-control logic is able to cope with this situation in a matter
of just some milliseconds, bringing the average Lag to ~ 300µs
two rather obvious bugfixes
(well, after watching the Scheduler in action...)
- the first planning-chunk needs an offset
- the future to block on must be set up before any dispatch happens
- prime diagnostics with the first time invocation
- print timings relative to this first invocation
- DUMP output to watch the crucial scheduling operations