With the latest improvements, the »breaking point search« works as expected
and yields meaningful data; however, it seems to be suited rather to specific setups
involving an extended graph with massive dependencies,
because only such a setup produces a clearly defined ''breaking point.''
Thus I'm considering complementing this research with another measurement setup
to establish a linear regression model of the Scheduler expense.
To allow integration of this different setup into the existing stress-test-rig,
some rearrangements of the builder notation are necessary; in particular, we need
to pass the type name of the actual tool, and it seems indicated to
reorder the source code to provide the config base class `StressRig`
at the top, followed by a long (and very technical) implementation
namespace.
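A minimal sketch of the kind of arrangement meant here; the builder function and its parameters are hypothetical, only `StressRig` as config base class is taken from the description above:

```cpp
// Hypothetical sketch: the config base class is provided first, while the type
// of the actual measurement tool is passed as template parameter into the test run.
struct StressRig                       // config base class, as mentioned above
{
    unsigned repetitions = 20;
    double   upperLimit  = 2.0;
};

template<class TOOL, class CONF>
auto performStressTest(CONF config)    // TOOL: type name of the actual tool
{
    TOOL tool{config};                 // the tool is configured from the given setup
    return tool.perform();             // hypothetical: run the configured measurement
}
```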
It turns out not to be correct to use all the divergence in concurrency
as a form factor, since it is quite common that not all cores can be active
at every level, given the structural constraints dictated by the load graph.
On the other hand, if the empirical work concurrency (excluding wait time)
systematically differs from the simple model used for establishing the schedule,
then this should indeed be considered a form factor and deducted from
the effective stress factor, since it is not a reserve available for speed-up.
The solution entertained here is to derive an effective compounded sum
of weights from the calculation used to build the schedule. This compounded
weight sum is typically lower than the plain sum of all node weights, which
is precisely due to the theoretical amount of expense reduction assumed
in the schedule generation. So this gives us a handle on the theoretically
expected expense, and by relating it to the plain weight sum we may draw conclusions
about the effective concurrency expected in this schedule.
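For illustration of this reasoning, a minimal sketch; the function name and numbers are purely illustrative:

```cpp
#include <vector>

// Sketch: the compounded weight sum already embodies the speed-up assumed while
// building the schedule, so relating the plain sum of all node weights to it
// yields the effective concurrency expected in this schedule.
double expectedConcurrency(std::vector<double> const& nodeWeights,
                           double compoundedWeightSum)
{
    double plainSum = 0.0;
    for (double w : nodeWeights)
        plainSum += w;
    return plainSum / compoundedWeightSum;   // e.g. 64 / 16 ⟹ effectively ≈ 4-fold concurrency
}
```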
Taking only this compounded part as the base for the empirical deviations yields search results
very close to stressFactor ~1 -- implying that the test setup now
observes what it was intended to observe...
In binary search, in order to establish the invariant initially,
a loop is necessary, since a single step might not be sufficient.
Moreover, the ongoing adjustments jeopardise detection of the
statistical breaking point condition, by causing a negative delta
due to gradually approaching the point of convergence -- leading
to an ongoing search in a region beyond the actual breaking point.
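As a generic sketch of that point (not the actual test code): before the bisection proper starts, the interval must actually bracket the breaking point, which may require repeated probes.

```cpp
#include <functional>

// Sketch of a binary search that first establishes its invariant in a loop:
// the upper bound is widened until the break-condition is actually observed;
// only then does the bisection narrow the interval down.
double bisectBreakingPoint(std::function<bool(double)> const& isBroken,
                           double lower, double upper, double epsilon)
{
    while (not isBroken(upper))        // establish invariant: upper is beyond the breaking point
        upper *= 2;                    // ...a single step might not be sufficient
    while (upper - lower > epsilon)
    {
        double mid = (lower + upper) / 2;
        if (isBroken(mid))
            upper = mid;
        else
            lower = mid;
    }
    return upper;
}
```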
Various misconceptions identified in the feedback path of the test algorithm:
- statistics are cumulative, which must be incorporated by norming on time base
- average concurrency includes idle times, which is beside the point within this
test setup, since additional wait-phases are injected when reducing stress
Relying on the new instrumentation facility, the actually effective
concurrency and cumulative run time of the test jobs can be established.
These can now be cast into a form-factor to represent actual excess expenses
in relation to the theoretical model.
By allowing the adapted schedule to be adjusted by this form factor,
it can be made to reflect the actual empiric load more closely,
hopefully leading to a more realistic effect of the stress-factor
and thus to results better suited for drawing conclusions about generic behaviour.
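To illustrate the intended correction, a plain arithmetic sketch; the names are illustrative, not the actual API:

```cpp
// Sketch: relate the observed cumulative job run time and the empirically
// effective concurrency to the expense predicted by the simple schedule model;
// the resulting form factor can then be used to stretch the adapted schedule.
double formFactor(double cumulativeRunTime,     // summed-up work time of all test jobs
                  double effectiveConcurrency,  // from the new instrumentation
                  double modelledExpense)       // expense predicted by the schedule model
{
    double observedExpense = cumulativeRunTime / effectiveConcurrency;
    return observedExpense / modelledExpense;   // >1 means actual load exceeds the model
}
```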
...turns out rather challenging to come up with any test case
that is both meaningful, simple to set up and understand, yet still
produces somewhat stable values. `IncidenceCount` seems most valuable
for investigation and direct inspection of results.
Various experiments to watch Scheduler behaviour under extended load.
Notably the example committed here makes the Scheduler run for 1.2 sec
and process 800 jobs with 10ms each, thereby putting the system into
100% load on all CPUs
- supplement the pre-dimensioning for data capture; without that,
  sporadic memory corruption indeed happens (as expected, since
  concurrent re-allocation of the vector with an entry for each
  thread is not thread-safe, and this test shows much contention);
  see the sketch after this list
- add a top-level logging for better diagnostics of errors
emanating from the test run
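The pre-dimensioning mentioned in the first point, as a minimal generic sketch (not the actual test code):

```cpp
#include <thread>
#include <vector>

// Sketch: one statistics slot per worker thread is allocated up-front, so no
// re-allocation can happen while workers are running; concurrent push_back or
// resize on the same vector would not be thread-safe.
struct Stat { long jobs = 0; double time = 0.0; };

void captureConcurrently(unsigned nThreads)
{
    std::vector<Stat> stats(nThreads);          // pre-dimensioned before any thread starts
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < nThreads; ++i)
        workers.emplace_back([&stats, i]{ stats[i].jobs += 1; });  // each thread touches only its own slot
    for (auto& t : workers)
        t.join();
}
```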
After an extended break due to "real life issues"....
Pick up the investigation, with the goal to ascertain a valid definition
and understanding of all test parameters. A first step is to establish
a baseline ''without using a computational load''; this might be some kind
of base overhead of the scheduler.
However -- the way the test scaffolding was built, it is difficult to
create a feedback loop for the statistical test setup with binary search,
since it is not really clear how the single control parameter of the test algorithm,
the so-called "stress factor", should be interpreted and how it can be
combined with a base load.
An extended series of tests, while watching the observed value patterns qualitatively,
seems to corroborate the former results, indicating that the base expense
in my test setup (using a debug build) is at ~200µs / Node / core.
Yet the difficulty of interpreting this result and arriving at a logical and generic model
prevents me from translating it into a measurement scheme that could
be executed independently of a specific test setup and hardware.
The goal is to devise a load more akin to the expected real-world processing patterns,
and then to increase the density to establish a breaking point.
Preliminary investigations focus on establishing the properties of this load
and to ensure the actual computation load behaves as expected.
Using the third Graph pattern prepared thus far, which produces
short chains of length=2, yet immediately spreads out to maximum concurrency.
This leads to 5.8 Nodes / Level on average.
...as it turned out, this segfault was caused by flaws in the ScheduleCtx
used to generate the test-schedule; especially when all node-spreads are set
to zero and thus all jobs are scheduled immediately at t=0, there was a loophole
in the logic to set the dependencies for the final »wake-up« job.
When running such a schedule in the Stress-Test-Bench, the next measurement run
could be started due to a premature wake-up job, thereby overrunning the previous
test-run, which could still be in the middle of computations.
So this was not a bug in the Scheduler itself, yet something to take care of
later when programming the actual Job-Planning and schedule generation.
Search processing pattern for massive parallel test.
The goal is to get all cores into active processing most of the time,
thus we need a graph with low dependency management overhead, which is
also consistently wide horizontally, so as to keep several jobs in working state
at all times. The investigation aims at finding out about systematic
overheads in such a setup.
This is just another (obvious) degree of freedom, which could be
interesting to explore in stress testing, while probably not of much
relevance in practice (if a job is expected to become runnable earlier,
it can as well just be scheduled earlier).
Some experimentation shows that the timing measurements exhibit more
fluctuations, but also slightly better times when pressure is low, which
is pretty much what I'd expect. When raising pressure, the average
times converge towards the same time range as observed with time bound
propagation.
Note that enabling this variation requires wiring a boolean switch
through various layers of abstraction; arguably this is an unnecessary
complexity and could be retracted once the »experimentation phase«
is over.
This completes the preparation of a Scheduler Stress-Test setup.
The `volatile` was used asymmetrically and there was concern that
this usage makes the `ComputationalLoad` dependent on concurrency.
However, an impact could not be confirmed empirically.
Moreover, to simplify this kind of tests, make the `schedDepends`
directly configurable in the Stress-Test-Rig.
...watching those dumps on the example Graph with excessive dependencies
made blatantly clear that we're dispatching a lot of unnecessary jobs,
since the actual continuation is /always/ triggered by the dependency-NOTIFY.
Before the rework of NOTIFY-Handling, this was rather obscured, but now,
since the NOTIFY trigger itself is also dispatched by the Scheduler,
it ''must be this job'' which actually continues the calculation, since
the main job ''can not pass the gate'' before the dependency notification
arrives.
Thus I've now added a variation to the test setup where all these duplicate
jobs are simply omitted. And, as expected, the computation runs faster
and with less signs of contention. Together with the other additional
parameter (the base expense) we might now actually be able to narrow down
on the observation of an ''expense socket'', which can then be
attributed to something like an ''inherent scheduler overhead''.
...actually difficult to integrate into the existing scheme,
which is entirely level-based. Can only be added to the individual Jobs,
not to the planning and completion-jobs — which actually shouldn't be a problem,
since it is beneficial to dispatch the planning runs earlier
The next goal is to determine basic performance characteristics
of the Scheduler implementation written thus far;
to help with these investigations some added flexibility seems expedient
- the ability to define a per-job base expense
- added flexibility regarding the scheduling of dependencies
This changeset introduces configuration options for both aspects.
While the idea of capturing observation values is nice,
it definitely does not belong in a library implementation of the
search algorithm, because this is usage-specific and grossly
complicates the invocation.
Rather, observation data can be captured by side-effect
from the probe-λ holding the actual measurement run.
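A sketch of the resulting usage pattern; the names, the stand-in search function and the threshold are purely illustrative:

```cpp
#include <functional>
#include <vector>

// Minimal stand-in for a library binary search over a probe-λ
double binarySearch(std::function<bool(double)> const& probe,
                    double lo, double hi, double eps)
{
    while (hi - lo > eps)
    {
        double mid = (lo + hi) / 2;
        if (probe(mid)) hi = mid;
        else            lo = mid;
    }
    return hi;
}

void exampleUsage()
{
    std::vector<double> observedRuntimes;             // usage-specific observation data
    auto probe = [&](double stressFactor)
                    {
                        double runtime = stressFactor * 40.0;     // stand-in for the actual measurement run
                        observedRuntimes.push_back(runtime);      // captured by side-effect from the probe-λ
                        return runtime > 39.0;                    // illustrative break-condition
                    };
    binarySearch(probe, 0.0, 2.0, 0.01);
}
```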
- Result found in typically 6-7 steps;
- running 20 instead of 30 samples seems sufficient
Breaking point in this example at stress-Factor 0.47 with run-time 39ms
This statistical criterion defines when to count observed Scheduler performance
as losing control. The test comprises three observations, which
must all be confirmed:
- an individual run counts as accidentally failed when the execution slips
away by more than 2ms with respect to the defined overall schedule.
When more than 55% of all observed runs are considered as failed,
the first condition is met
- moreover, the observed standard deviation must also surpass the
  same limit of > 2ms, which indicates that the Scheduling mechanism
  is under substantial strain (on average, the slip is ~ 200µs)
- the third condition is that the ''averaged delta'' has surpassed
4ms, which is 2 times the basic failure indicator.
These conditions are based on watching the Scheduler in operation;
typically all three conditions slip away by a large margin after a
very narrow yet critical increase in the stress level.
Using three conditions together should improve robustness; often
the problems to keep up with the schedule build up over some parameter
range, yet the actual decision should be based on complete loss of control.
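To make the criterion concrete, a rough sketch of the conjunction; all values in milliseconds, thresholds as stated above, names illustrative:

```cpp
#include <cmath>
#include <vector>

// Sketch of the combined break-criterion: all three conditions must hold.
bool schedulerLosesControl(std::vector<double> const& deltas)   // slip of each run vs. schedule
{
    if (deltas.empty()) return false;
    double n = deltas.size();
    double failed = 0, sum = 0, sumSq = 0;
    for (double d : deltas)
    {
        if (d > 2.0) ++failed;         // individual run counts as failed when slipping > 2ms
        sum   += d;
        sumSq += d*d;
    }
    double avg  = sum / n;
    double sdev = std::sqrt(sumSq/n - avg*avg);   // (population) standard deviation
    return failed/n > 0.55             // more than 55% of runs failed
       and sdev     > 2.0              // standard deviation beyond the same 2ms limit
       and avg      > 4.0;             // averaged delta beyond 4ms (2× the failure indicator)
}
```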
adapt the code written yesterday explicitly for the test case
into the new framework for performing a stress-test run.
Notable difference: times are converted to milliseconds immediately
Elaborate the draft to include all the elements used directly in the test case thus far;
the goal is to introduce some structuring and leave room for flexible configuration,
while implementing the actual binary search as library function over Lambdas.
My expectation is to write a series of individual test instances with varying parameters,
while it seems possible to add further performance test variations into that scheme later on.
- the goal is to run a binary search
- the search condition should be factored out
- thus some kind of framework or DSL is required,
to separate the technicalities of the measurement
from the specifics of the actual test case.
- repeated invocations of the same test setup for statistics
- the usual nasty 64-node graph with massive fork out
- limit concurrency to 4 cores
- tabulate data to look for clues regarding a trigger criterion
Hypothesis: The Scheduler slips off schedule when all of the
following three criteria are met:
- more than 55% glitches with Δ > 2ms
- σ > 2ms
- ∅Δ > 4ms
...this one was quite silly: obviously we need a separate instance
of the memory block ''per invocation'', otherwise concurrent invocations
would corrupt each other's allocation. The whole point of this variant
of the computation-load is to access a ''private'' memory block...
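For illustration, the essential point in a strongly simplified form (not the actual computation-load code):

```cpp
#include <cstddef>
#include <vector>

// Sketch: each invocation works on its own private memory block; a single
// shared block would be corrupted by concurrent invocations.
std::size_t computeWithPrivateBlock(std::size_t blockSize)
{
    std::vector<char> block(blockSize);        // separate instance per invocation
    std::size_t checksum = 0;
    for (std::size_t i = 0; i < blockSize; ++i)
    {
        block[i] = char(i);                    // touch the private memory
        checksum += std::size_t(block[i]) & 0xFF;
    }
    return checksum;
}
```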
- schedule can now be adapted to concurrency and expected distribution of runtimes
- additional stress factor to press the schedule (1.0 is nominal speed)
- observed run-time now without Scheduler start-up and pre-roll
- document and verify computed numbers
...based on the adapted time-factor sequence
implemented yesterday in TestChainLoad itself
- in this case, the TimeBase from the computation load is used as level speed
- this »base beat« is then modulated by the timing factor sequence
- working in an additional stress factor to press the schedule uniformly
- actual start time will be added as offset once the actual test commences
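In outline, the schedule derivation described by these points might look like this illustrative sketch (not the actual implementation):

```cpp
#include <vector>

// Sketch: the »base beat« (level speed from the computation load) is modulated by the
// per-level timing factor sequence; a stress factor below 1.0 presses the schedule
// tighter than nominal; the actual start time is added as offset later.
std::vector<double> buildSchedule(std::vector<double> const& levelFactors,
                                  double baseBeat, double stressFactor, double startOffset)
{
    std::vector<double> startTimes;
    double t = startOffset;
    for (double factor : levelFactors)
    {
        startTimes.push_back(t);
        t += baseBeat * factor * stressFactor;   // stressFactor 1.0 = nominal speed
    }
    return startTimes;
}
```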
...up to now, we've relied on a regular schedule governed solely
by the progression of node levels, with a fixed level speed
defaulting to 1ms per level.
But in preparation of stress testing, we need a schedule adapted
to the expected distribution of computation times, otherwise
we'll not be able to factor out the actual computation graph
connectivity. The goal is to establish a distinctive
**breaking point** when the scheduler is unable to cope with
the provided schedule.
The helper developed thus far produces a sequence of
weight factors per level, which could then be multiplied
with an actual delay base time to produce a concrete schedule.
These calculations, while simple, are difficult to understand;
recommended to use the values tabulated in this test together
with a `graphviz` rendering of the node graph (🠲 `printTopologyDOT()`)
The intention is to establish a theoretical limit for the expense,
given some degree of concurrency. In reality, the expense should always
be greater, since the time is not just split by the number of cores;
rather we need to chain up existing jobs of various weight on the available
cores (which is a special case of the bin packing problem).
With this formula, an ideal weight factor can be determined for each level,
and summing up the sequence of levels gives us a guess at a sensible
timing for the overall schedule.
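The kind of calculation meant here, in an idealised sketch; it ignores the packing effect just mentioned, and the names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Idealised limit: within one level, the total node weight is assumed to be split
// evenly over the cores that can actually be used at that level.
double idealWeightFactor(std::vector<double> const& nodeWeights, unsigned cores)
{
    if (nodeWeights.empty()) return 0.0;
    double total = 0.0;
    for (double w : nodeWeights)
        total += w;
    std::size_t usable = std::min<std::size_t>(cores, nodeWeights.size());
    return total / usable;                 // theoretical limit; real job packing is always worse
}

// Summing the per-level factors yields a guess at an overall schedule timing,
// to be multiplied later with an actual delay base time.
double scheduleGuess(std::vector<std::vector<double>> const& levels, unsigned cores)
{
    double sum = 0.0;
    for (auto const& level : levels)
        sum += idealWeightFactor(level, cores);
    return sum;
}
```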
...so IterExplorer got yet another processing layer,
which uses the grouping mechanics developed yesterday,
but is freely configurable through λ-Functions.
At the actual usage site in TestChainLoad, now only the actual
aggregation computation must be supplied, and follow-up computations
can be chained up easily as further transformation layers.
Yesterday I've written a simple loop-based implementation of
a grouping aggregation to count the node weights per level.
Unfortunately it turns out we'll use several flavours of this
and would have to chain up postprocessing -- thus from a usage perspective
it would be better to have the same functionality packaged as an iterator pipeline.
This turns out to be surprisingly tricky, and there is no suitable library
function available, which means I'll have to write one myself.
This changeset is the first step into this direction: reformulate
the simple for-loop into a demand-driven grouping iterator
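As a rough, stand-alone illustration of the direction (a generic sketch, not the actual iterator pipeline):

```cpp
#include <utility>
#include <vector>

// Sketch of a demand-driven grouping iteration: consecutive (level, weight) entries
// are aggregated lazily, one group per pull, instead of a single eager for-loop.
struct GroupedSums
{
    using Entry = std::pair<int, double>;                     // (level, weight)
    std::vector<Entry>::const_iterator pos, end;

    explicit GroupedSums(std::vector<Entry> const& data)      // data must outlive the iterator
        : pos(data.begin()), end(data.end()) { }

    bool hasNext() const { return pos != end; }

    Entry next()                                              // pull the next group on demand
    {
        int level = pos->first;
        double sum = 0.0;
        while (pos != end and pos->first == level)
            sum += (pos++)->second;                           // aggregate node weights of this level
        return {level, sum};
    }
};
```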
...the idea is to use the sum of node weights per level
to create a schedule, which more closely reflects the distribution
of actual computation time. Hopefully such a schedule can then be
squeezed or stretched by a time factor to find out a ''breaking point'',
at which the Scheduler is no longer able to keep up.
In-depth investigation and reasoning highlighted another problem,
which could lead to memory corruption in rare cases; in the end
I found a solution by caching the ''address'' of the current Epoch
and re-validating this address on each Epoch-overflow.
After some difficulties getting any reliable measurement for a Release-build,
it turned out that this solution even ''improves performance by 22%''
Remark-1: the static blockFlow::Config prevents simple measurements by
just recompiling one translation unit; it is necessary to build the
relevant parts of Vault-layer with optimisation to get reliable numbers
Remark-2: performing a full non-DEBUG build highlighted two missing
header-inclusions to allow for the necessary template specialisations.
...discovered during investigation of the latest Scheduler failures.
The root of the problems is that block overflow can potentially trigger
expansion of the allocation pool. Under some circumstances, this on-the-fly
allocation requires a rotation of index slots, thereby invalidating
existing iterators.
While such behaviour is not uncommon with storage data structures (see std::vector),
in this case it turns out problematic because due to performance considerations,
a usage pattern emerged which exploits re-using existing storage »Slots« with known
deadline. This optimisation seems to have significant leverage on the
planning jobs, which happen to allocate and arrange a whole strike of
Activities with similar deadlines.
One of these problem situations can easily be fixed, since it is triggered
through the iterator itself, using a delegate function to request a storage expansion,
at which point the iterator is able to re-link and fix its internal index.
This solution also has no tangible performance implications in optimised code.
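A heavily simplified toy model of that mechanism, not the actual BlockFlow code and with made-up names:

```cpp
#include <cstddef>
#include <deque>

// Toy sketch: the iterator does not hold a raw pointer, but asks the pool through
// a delegate when more storage is needed; since the pool knows which element the
// iterator refers to, it hands back a re-validated index even if the expansion
// shuffled the underlying slots.
struct Pool
{
    std::deque<int> slots;

    std::size_t expandAndRelocate(std::size_t currentIdx)     // delegate used by the iterator
    {
        slots.push_front(0);                                  // expansion may shift positions...
        return currentIdx + 1;                                // ...so return the corrected index
    }
};

struct PoolIter
{
    Pool&       pool;
    std::size_t idx = 0;

    void requestExpansion()
    {
        idx = pool.expandAndRelocate(idx);                    // re-link internal index after expansion
    }
    int& operator*() { return pool.slots[idx]; }
};
```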
Unfortunately there remains one obscure corner case where such a pool expansion
could also have invalidated other iterators, which are then used later to
attach dependency relations; even a partial fix for that problem seems
to cause a considerable performance cost of about 14% in optimised code.
- now there can no longer be any direct dispatch when entering events
- thus there is no decision logic at the entrance anymore
- rather, the work-function implementation moved down into Layer-2
- so add a unit-test like coverage there (integration in SchedulerService_test)
This amounts to a rather massive refactoring, prompted by the enduring problems
observed when pressing the scheduler. All the various glitches and (fixed) crashes
are related to the way planning-jobs enter the schedule items,
which is also closely tied to the difficulties getting the locking
for planning-jobs correct.
The solution pursued hereby is to reorder the main avenues into the
scheduler implementation. There is now a streamlined main entrance,
which **always** enqueues only, allowing to omit most checks and
coordination. On the other hand, the complete coordination and dispatch
of the work capacity is now shifted down into the SchedulerCommutator,
thereby linking all coordination and access control close together
into a single implementation facility.
If this works out as intended:
- several repeated checks on the Grooming-Token could be omitted (performance)
- the planning-job would no longer be able to lose / drop the Token,
  thereby enforcing single-threaded operation (as was the original intention)
- since all planning effectively originates from planning-jobs, this
would allow to omit many safety barriers and complexities at the
scheduler entrance avenue, since now all entries just go into the queue.
WIP: tests pass compiler, but must be adapted / reworked
- fix mistake in schedule time for planning chunks (must use start, not end of chunk)
- allow to configure the heuristics for pre-roll (time reserved for planning a node)
...observing multiple failures, which seem to be interconnected
- the test-setup with the planning chunk pre-roll is insufficient
- basically it is not possible to perform further concurrent planning
  without getting behind the actual schedule; at least in the setup
  with DUMP print statements (which slow down everything)
- multiple chained re-entrant calls into the planning function can result
- the **ASSERTION in the Allocator** was triggered again
- the log+stacktrace indicate that there **is still a Gap**
in the logic to protect the allocations via Grooming-Token
...causing the system to freeze due to excess memory allocation.
Fortunately it turned out this was not an error in the Scheduler core
or memory manager, but rather a sloppiness in the test scaffolding.
However, this incident highlights that the memory manager lacks some
sanity checks to prevent outright nonsensical allocation requests.
Moreover it became clear again that the allocation happens ''already before''
entering the Scheduler — and thus the existing sanity check comes too late.
Now I've used the same reasoning also for additional checks in the allocator,
limiting the Epoch increment to 3000 and the total memory allocation to 8GiB
Talking of Gibibytes...
indeed we could use a shorthand notation for that purpose...
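Such a shorthand could be a user-defined literal along these lines (a sketch, not necessarily the notation actually introduced):

```cpp
// Sketch: user-defined literal as shorthand for Gibibyte sizes
constexpr unsigned long long operator""_GiB(unsigned long long n)
{
    return n << 30;                        // 1 GiB = 2^30 bytes
}

static_assert(8_GiB == 8589934592ull, "8 GiB in bytes");
```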
while my basic assessment is still that contention will not play a significant
role given the expected real world usage scenario — when testing with
tighter schedule and rather short jobs (500µs), some phases of massive contention
can be observed, leading to significant slow-down of the test.
The major problem seems to be that extended phases of contention will
effectively cause several workers to remain in an active spinning-loop for
multiple microseconds, while also permanently reading the atomic lock.
Thus an adaptive scheme is introduced: after some repeated contention events,
workers now throttle down by themselves, with polling delays increased
with exponential stepping up to 2ms. This turns out to be surprisingly
effective and completely removes any observed delays in the test setup.
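In spirit, the throttling works along these lines (a strongly simplified sketch, not the actual worker code):

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <thread>

// Simplified sketch of the adaptive scheme: on repeated contention a worker throttles
// itself, stepping up its polling delay exponentially, capped at 2ms.
void contendedAcquire(std::atomic_flag& lock)
{
    std::chrono::microseconds delay{0};
    while (lock.test_and_set(std::memory_order_acquire))       // still contended...
    {
        delay = delay.count() == 0 ? std::chrono::microseconds{50}
                                   : std::min(delay * 2, std::chrono::microseconds{2000});
        std::this_thread::sleep_for(delay);                    // back off instead of hot spinning
    }
}
```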
Turns out that we need to implement fine-grained and explicit handling logic
to ensure that Activity planning only ever happens protected by the Grooming-Token.
This is in accordance with the original design, which dictates that all management tasks
must be done in »management mode«, which can only be entered by a single thread at a time.
The underlying assumption is that the effort for management work is dwarfed in comparison
to any media calculation work.
However, in
5c6354882d
...I discovered an insidious border condition, and in an attempt to fix it,
I broke that fundamental assumption. The problem arises from the fact that we
do want to expose a *public API* of the Scheduler. Even while this is only used
to ''seed'' a calculation stream, because any further planning- and management work
will be performed by the workers themselves (this is a design decision, we do not
employ a "scheduler thread")
Anyway, since the Scheduler API ''is'' public, ''someone from the outside'' could
invoke those functions, and — unaware of any Scheduler internals — will
automatically acquire the Grooming-Token, yet never release it,
leading to deadlock.
So we need a dedicated solution, which is hereby implemented as a
scoped guard: in the standard case, the caller is a management-job and
thus already holds the token (and nothing must be done). But in the
rare case of an »outsider«, this guard now ''transparently'' acquires
the token (possibly with a blocking wait) and ''drops it when leaving scope''
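In outline, such a guard might look like the following schematic sketch; a plain mutex stands in for the Grooming-Token, and the actual token handling in the Scheduler is more involved:

```cpp
#include <mutex>

// Schematic sketch of the scoped guard: if the calling thread already holds the
// token (the regular case for management-jobs), nothing happens; an »outsider«
// transparently acquires it, possibly blocking, and drops it when leaving scope.
class GroomingGuard
{
    std::mutex& token_;
    bool        acquired_;
public:
    GroomingGuard(std::mutex& token, bool alreadyHeld)
        : token_{token}
        , acquired_{false}
    {
        if (not alreadyHeld)
        {
            token_.lock();            // possibly a blocking wait
            acquired_ = true;
        }
    }
   ~GroomingGuard()
    {
        if (acquired_)
            token_.unlock();          // drop the token when leaving scope
    }
};
```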
...this feature seems to be no longer necessary now;
the actual implementation is left in-code for the time being,
but it has been removed from the public access API.