Search a processing pattern for a massively parallel test.
The goal is to keep all cores busy with active processing most of the time;
thus we need a graph with low dependency management overhead, which is
also consistently wide, so that several jobs are in working state
all of the time. The investigation aims at finding systematic
overheads in such a setup.
This is just another (obvious) degree of freedom, which could be
interesting to explore in stress testing, while probably not of much
relevance in practice (if a job is expected to become runnable earlier,
it can just as well be scheduled earlier).
Some experimentation shows that the timing measurements exhibit more
fluctuations, but also slightly better times when pressure is low, which
is pretty much what I'd expect. When raising pressure, the average
times converge towards the same time range as observed with time bound
propagation.
Note that enabling this variation requires wiring a boolean switch
through various layers of abstraction; arguably this is unnecessary
complexity and could be retracted once the »experimentation phase«
is over.
This completes the preparation of a Scheduler Stress-Test setup.
The `volatile` was used asymmetrically and there was concern that
this usage makes the `ComputationalLoad` dependent on concurrency.
However, an impact could not be confirmed empirically.
Moreover, to simplify this kind of test, make the `schedDepends`
directly configurable in the Stress-Test-Rig.
- Result found in typically 6-7 steps;
- running 20 instead of 30 samples seems sufficient
Breaking point in this example at stress-Factor 0.47 with run-time 39ms
Elaborate the draft to include all the elements used directly in the test case thus far;
the goal is to introduce some structuring and leave room for flexible configuration,
while implementing the actual binary search as a library function over Lambdas.
My expectation is to write a series of individual test instances with varying parameters,
while it seems possible to add further performance test variations into that scheme later on.
- the goal is to run a binary search
- the search condition should be factored out
- thus some kind of framework or DSL is required
to separate the technicalities of the measurement
from the specifics of the actual test case (a sketch follows after this list)
- repeated invocations of the same test setup for statistics
- the usual nasty 64-node graph with massive fork out
- limit concurrency to 4 cores
- tabulate data to look for clues regarding a trigger criterion
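To make the intended usage concrete, here is a minimal sketch of such a search function over a λ; the name `searchBreakingPoint` and its signature are illustrative only, not the actual test-rig API. The search condition itself (e.g. the criteria of the hypothesis below) would be passed in as the λ:

```cpp
#include <functional>

/** Binary search for the breaking point of the scheduler.
 *  Convention assumed here: one measurement series fails (slips off schedule)
 *  for small stress factors (tight schedule) and copes at the relaxed end.
 *  @param fails   λ performing one measurement series at the given stress factor
 *  @param lo,hi   initial bracket of stress factors (failing resp. passing end)
 *  @param epsilon bisection precision at which to stop
 *  @return stress factor marking the breaking point
 */
inline double
searchBreakingPoint (std::function<bool(double)> fails
                    ,double lo =0.0, double hi =1.0
                    ,double epsilon =0.01)
{
  while (hi - lo > epsilon)
    {
      double mid = (lo + hi) / 2;
      if (fails (mid))
        lo = mid;     // still failing: breaking point lies above
      else
        hi = mid;     // coping: tighten the schedule further
    }
  return (lo + hi) / 2;
}
```

With a bracket of [0,1] and a precision of 0.01, this bisection needs about seven probes, which matches the 6-7 steps observed above.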
Hypothesis: The Scheduler slips off schedule when all of the
following three criteria are met:
- more than 55% glitches with Δ > 2ms
- σ > 2ms
- ∅Δ > 4ms
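Expressed as code, this check could look roughly as follows (the observation record and its field names are made up for illustration):

```cpp
/** hypothetical record of observations from one measurement series */
struct Observation
  {
    double glitchRatio;   // fraction of jobs delayed by Δ > 2ms
    double sigma;         // standard deviation of the delays [ms]
    double avgDelta;      // average delay ∅Δ against the schedule [ms]
  };

/** the breaking-point hypothesis: all three criteria must be met */
inline bool
slippedOffSchedule (Observation const& o)
{
  return o.glitchRatio > 0.55
     and o.sigma       > 2.0
     and o.avgDelta    > 4.0;
}
```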
...this one was quite silly: obviously we need a separate instance
of the memory block ''per invocation'', otherwise concurrent invocations
would corrupt each other's allocation. The whole point of this variant
of the computation-load is to access a ''private'' memory block...
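As a sketch of the corrected arrangement (grossly simplified; the real computation-load does more than just touching memory):

```cpp
#include <cstddef>
#include <vector>

/** simplified sketch: every invocation allocates and works on its own
 *  private memory block, so concurrent invocations can not interfere */
inline size_t
computeWithPrivateMemory (size_t blockSize)
{
  std::vector<unsigned char> privateBlock(blockSize);   // one block per invocation
  size_t checksum{0};
  for (size_t i=0; i<blockSize; ++i)
    {
      privateBlock[i] = (unsigned char)i;               // touch the whole block
      checksum += privateBlock[i];
    }
  return checksum;
}
```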
- schedule can now be adapted to concurrency and expected distribution of runtimes
- additional stress factor to press the schedule (1.0 is nominal speed)
- observed run-time now without Scheduler start-up and pre-roll
- document and verify computed numbers
...based on the adapted time-factor sequence
implemented yesterday in TestChainLoad itself
- in this case, the TimeBase from the computation load is used as level speed
- this »base beat« is then modulated by the timing factor sequence
- working in an additional stress factor to press the schedule uniformly
- the actual start time will be added as an offset once the test commences
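The resulting calculation amounts to roughly the following (a sketch with invented names, not the actual TestChainLoad interface):

```cpp
#include <vector>

/** sketch: derive per-level schedule offsets (in ms) from the weight factor sequence.
 *  Each level starts after the accumulated weight of all preceding levels,
 *  scaled by the »base beat« (the TimeBase of the computation load) and
 *  squeezed or stretched uniformly by the stress factor (1.0 ≙ nominal speed).
 */
inline std::vector<double>
buildSchedule (std::vector<double> const& levelFactors, double baseBeat, double stressFac)
{
  std::vector<double> offsets;
  offsets.reserve (levelFactors.size());
  double accumulated = 0.0;
  for (double factor : levelFactors)
    {
      offsets.push_back (stressFac * baseBeat * accumulated);
      accumulated += factor;            // next level starts after this one's weight
    }
  return offsets;   // actual start time of the test run is added as offset later
}
```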
...up to now, we've relied on a regular schedule governed solely
by the progression of node levels, with a fixed level speed
defaulting to 1ms per level.
But in preparation for stress testing, we need a schedule adapted
to the expected distribution of computation times, otherwise
we'll not be able to factor out the actual computation graph
connectivity. The goal is to establish a distinctive
**breaking point** when the scheduler is unable to cope with
the provided schedule.
The helper developed thus far produces a sequence of
weight factors per level, which could then be multiplied
with an actual delay base time to produce a concrete schedule.
These calculations, while simple, are difficult to follow; it is
recommended to use the values tabulated in this test together
with a `graphviz` rendering of the node graph (🠲 `printTopologyDOT()`)
The intention is to establish a theoretical limit for the expense,
given some degree of concurrency. In reality, the expense should always
be greater, since the time is not just split by the number of cores;
rather we need to chain up existing jobs of various weight on the available
cores (which is a special case of the bin packing problem).
With this formula, an ideal weight factor can be determined for each level,
and then summing up the sequence of levels gives us an estimate of a sensible
timing for the overall schedule.
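Spelled out as a sketch (the function name is mine, the reasoning follows the argument above): n jobs on c cores need at least ⌈n/c⌉ consecutive runs, so the attainable speed-up is n / ⌈n/c⌉ rather than c, and the level's total weight is divided by that speed-up:

```cpp
#include <cmath>
#include <cstddef>

/** sketch of the ideal weight factor for one level:
 *  divide the level's total weight by the attainable speed-up,
 *  which is limited by the number of consecutive »runs« needed
 *  to place all nodes of this level onto the available cores.
 */
inline double
idealLevelWeight (double totalWeight, size_t nodes, size_t cores)
{
  if (nodes == 0 or cores == 0) return 0.0;
  double runs    = std::ceil (double(nodes) / cores);
  double speedUp = nodes / runs;       // ≤ cores; equals cores only when evenly divisible
  return totalWeight / speedUp;
}
```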
...so IterExplorer got yet another processing layer,
which uses the grouping mechanics developed yesterday,
but is freely configurable through λ-Functions.
At the actual usage site in TestChainLoad, only the actual
aggregation computation must be supplied, and follow-up computations
can then be chained up easily as further transformation layers.
Yesterday I wrote a simple loop-based implementation of
a grouping aggregation to count the node weights per level.
Unfortunately it turns out we'll use several flavours of this
and we'd have to chain up postprocessing -- thus from a usage perspective
it would be better to have the same functionality packaged as an iterator pipeline.
This turns out to be surprisingly tricky and there is no suitable library
function available, which means I'll have to write one myself.
This changeset is the first step in this direction: reformulate
the simple for-loop into a demand-driven grouping iterator.
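A stripped-down sketch of what such a demand-driven grouping iterator boils down to (not the actual IterExplorer building block, just the core idea):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

/** sketch: aggregate consecutive entries with equal level on demand;
 *  nothing is summed up until the next group is actually pulled.
 */
class GroupedLevelWeights
  {
  public:
    using Entry = std::pair<size_t,double>;    // (level, node weight), ordered by level

    GroupedLevelWeights (std::vector<Entry> const& src)
      : pos_{src.begin()}
      , end_{src.end()}
      { }

    explicit operator bool()  const { return pos_ != end_; }

    /** pull the next group: (level, accumulated weight of that level) */
    Entry
    next()
      {
        size_t level = pos_->first;
        double sum   = 0.0;
        while (pos_ != end_ and pos_->first == level)
          sum += (pos_++)->second;
        return {level, sum};
      }

  private:
    std::vector<Entry>::const_iterator pos_, end_;
  };
```

Usage would then be a simple pull loop: `for (GroupedLevelWeights grp{entries}; grp; ) { auto [level,weight] = grp.next(); … }`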
...the idea is to use the sum of node weights per level
to create a schedule, which more closely reflects the distribution
of actual computation time. Hopefully such a schedule can then be
squeezed or stretched by a time factor to find a ''breaking point'',
at which the Scheduler is no longer able to keep up.
- now there can no longer be any direct dispatch when entering events
- thus there is no decision logic at the entrance anymore
- rather, the work-function implementation moved down into Layer-2
- so add unit-test-like coverage there (integration in SchedulerService_test)
- fix mistake in schedule time for planning chunks (must use start, not end of chunk)
- allow configuring the heuristics for pre-roll (time reserved for planning a node)
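To illustrate the kind of heuristic meant here (names and default numbers are invented for the sketch, not the real configuration):

```cpp
#include <chrono>
#include <cstddef>

/** sketch of a configurable pre-roll heuristic:
 *  reserve a fixed setup cost plus some planning time per job
 *  in the upcoming planning chunk.
 */
struct PreRollConfig
  {
    std::chrono::microseconds planOverhead {200};   // fixed setup cost
    std::chrono::microseconds timePerJob    {20};   // planning time reserved per node

    std::chrono::microseconds
    preRoll (size_t jobsInChunk)  const
      {
        return planOverhead + timePerJob * static_cast<long long>(jobsInChunk);
      }
  };
```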
...observing multiple failures, which seem to be interconnected
- the test-setup with the planning chunk pre-roll is insufficient
- basically it is not possible to perform further concurrent planning
without falling behind the actual schedule; at least in the setup
with DUMP print statements (which slow down everything)
- multiple chained re-entrant calls into the planning function can result
- the **ASSERTION in the Allocator** was triggered again
- the log+stacktrace indicate that there **is still a Gap**
in the logic to protect the allocations via Grooming-Token
...causing the system to freeze due to excess memory allocation.
Fortunately it turned out this was not an error in the Scheduler core
or memory manager, but rather sloppiness in the test scaffolding.
However, this incident highlights that the memory manager lacks some
sanity checks to prevent outright nonsensical allocation requests.
Moreover it became clear again that the allocation happens ''already before''
entering the Scheduler — and thus the existing sanity check comes too late.
I've now applied the same reasoning to add further checks in the allocator,
limiting the Epoch increment to 3000 and the total memory allocation to 8GiB.
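The added guards amount to hard limits along these lines (a simplified sketch; the actual allocator code is of course more involved):

```cpp
#include <cstddef>
#include <stdexcept>

const size_t MAX_EPOCH_INCREMENT  = 3000;                          // cap on Epochs added in one step
const size_t MAX_TOTAL_ALLOCATION = size_t(8) *1024*1024*1024;     // 8GiB overall limit

/** sketch of the sanity check guarding against nonsensical allocation requests */
inline void
sanityCheckAllocation (size_t epochIncrement, size_t totalAllocatedBytes)
{
  if (epochIncrement > MAX_EPOCH_INCREMENT)
    throw std::runtime_error{"suspicious Epoch increment, refusing to allocate"};
  if (totalAllocatedBytes > MAX_TOTAL_ALLOCATION)
    throw std::runtime_error{"memory allocation exceeds sanity limit of 8GiB"};
}
```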
Talking of Gibibytes...
indeed we could use a shorthand notation for that purpose...
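For instance, a user-defined literal would provide such a shorthand (just a sketch, not yet part of the code base):

```cpp
#include <cstddef>

/** sketch: shorthand notation for binary-prefixed byte sizes */
constexpr size_t operator""_KiB (unsigned long long n) { return size_t(n) << 10; }
constexpr size_t operator""_MiB (unsigned long long n) { return size_t(n) << 20; }
constexpr size_t operator""_GiB (unsigned long long n) { return size_t(n) << 30; }

// the allocation limit from above could then simply read:  8_GiB
```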