...hoped to keep it simple, but this is inevitable, since we
want to provide a CSV list as a value within a list of key=value
bindings, all packaged into a simple string for easy testing.
Thus the parsing RegExp just needs two branches, for simple and for quoted values
...implemented by simply parsing the string into key=value pairs,
which are then stored into a shared map. The actual data binding
implementation can thus be inherited from the existing Map-binding
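A sketch of the kind of parsing implied here, under an assumed binding syntax like `key1=val, key2="a,b,c"` (illustrative only, not the actual RegExp or Map-binding code):

```cpp
// Sketch only: split a binding spec into key=value pairs and store them in a map;
// the value RegExp needs two branches: a quoted value (which may contain commas,
// i.e. a CSV list) and a simple unquoted value.
#include <map>
#include <regex>
#include <string>

std::map<std::string,std::string> parseBindings (std::string const& spec)
{
    std::map<std::string,std::string> binding;
    static const std::regex KEY_VAL{R"((\w+)\s*=\s*(?:"([^"]*)"|([^,]*)))"};
    for (auto it = std::sregex_iterator{spec.begin(), spec.end(), KEY_VAL};
         it != std::sregex_iterator{}; ++it)
    {
        auto& match = *it;
        binding[match[1].str()] = match[2].matched? match[2].str()
                                                  : match[3].str();
    }
    return binding;
}
```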
While they were detected just fine, they were passed through
unaltered, which subverts the purpose of such an escape:
to allow the tag syntax to be present in the
processed, substituted document (e.g. when generating a
shell script);
thus `\${escaped}` becomes `${escaped}`
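A minimal sketch of such an unescaping step (hypothetical helper, not the actual implementation):

```cpp
// Strip the escape marker after tag detection, so that "\${escaped}"
// ends up as "${escaped}" in the substituted output.
#include <regex>
#include <string>

std::string unescapeTags (std::string const& text)
{
    static const std::regex ESCAPED_TAG{R"(\\(\$\{))"};
    return std::regex_replace (text, ESCAPED_TAG, "$1");   // drop the backslash, keep "${"
}
```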
...turns out the ''pipeline design'' is not a good fit for the
Action compilation, since the compiler needs to refer to previous Actions;
better to let the compiler ''build'' the `ActionSeq`
...implemented as »custom processing layer« within a
demand-driven parsing pipeline, with the ability to
inject additional Action-tokens to represent the constant
text interspersed between tags; special handling is required
to expose one constant postfix after the last active tag.
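The following sketch illustrates the token-injection idea with simplified, hypothetical types (the actual layer works demand-driven within the pipeline):

```cpp
// Each detected tag yields an Action, while the constant text preceding it is
// emitted as an additional "literal" Action; one final literal token covers the
// constant postfix after the last active tag.
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct Action { bool isTag; std::string text; };

std::vector<Action>
scanTemplate (std::string const& src,
              std::vector<std::pair<std::size_t,std::size_t>> const& tagSpans)
{                                           // tagSpans: (position,length) of each tag
    std::vector<Action> actions;
    std::size_t pos = 0;
    for (auto [start,len] : tagSpans)
    {
        if (start > pos)                    // inject literal text in front of this tag
            actions.push_back ({false, src.substr (pos, start-pos)});
        actions.push_back ({true, src.substr (start, len)});
        pos = start + len;
    }
    if (pos < src.size())                   // expose the constant postfix at the end
        actions.push_back ({false, src.substr (pos)});
    return actions;
}
```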
MatchSeq was imported recently from the Yoshimi-testsuite,
as a supporting helper for the CSV table component.
Actually this is just a thin wrapper on top of std::regex_iterator,
which in turn has properties and behaviour very similar to Lumiera's
»Forward Iterator« concept (in fact, it was a source of inspiration to
generalise such a pattern).
So this is an obvious round out and cleanup, as it requires just some
minor additions and adjustments to allow processing a sequence of matches
through a for-loop or some elaborate pipelining setup.
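Usage then boils down to the familiar iteration pattern (illustrative; the actual MatchSeq interface may differ slightly):

```cpp
// std::regex_iterator already behaves very much like a »Forward Iterator«,
// so a sequence of matches can be consumed in a plain for-loop.
#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::string text = "rev 42 .. rev 43 .. rev 57";
    std::regex number{R"(\d+)"};
    for (auto it = std::sregex_iterator{text.begin(), text.end(), number};
         it != std::sregex_iterator{}; ++it)
        std::cout << it->str() << '\n';     // prints 42, 43, 57
}
```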
The way I've written this helper template, as a byproduct
it is also possible to maintain the back-reference to the container
through a smart-ptr. In this case, the iterator-handle also manages
the ownership automatically.
...mostly we want the usual convenient handling pattern for iterators,
but with the proviso that access is actually performed by subscript,
plus the ability to re-set to another current index
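A minimal sketch of that access pattern (hypothetical helper, not the actual code):

```cpp
// Iterator-style handle which accesses the container by subscript internally
// and can be re-pointed to an arbitrary current index.
#include <cstddef>
#include <iostream>
#include <vector>

template<class CON>
struct IndexIter
{
    CON*        container;
    std::size_t index{0};

    auto& operator*() const        { return (*container)[index]; }
    IndexIter& operator++()        { ++index; return *this; }
    explicit operator bool() const { return index < container->size(); }

    void resetTo (std::size_t newIndex) { index = newIndex; }   // re-set the current position
};

int main()
{
    std::vector<int> data{1,2,3,4,5};
    IndexIter<std::vector<int>> it{&data};
    it.resetTo (2);
    for ( ; it; ++it)
        std::cout << *it << ' ';            // prints 3 4 5
}
```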
* establish the feature set to provide
* choose scheme for runtime representation
* break down analysis to individual parsing and execution steps
* conclude which actions to conduct and the necessary data
* derive the abstract binding API required
Conducted an extended investigation regarding text templating
and the library solutions available and still maintained today.
The conclusion is
* there are some mature and widely used solutions available for C++
* all of these are considered a mismatch for the task at hand,
which is to generate Gnuplot scripts for test data visualisation
Points of contention
* all solutions offer a massive feature set, oriented towards web content generation
* all solutions provide their own structured data type or custom property-tree framework
**Decision** 🠲 better to write a minimalistic templating engine from scratch rather
than to adopt one of these libraries.
In the Lumiera code base, we use C-String constants as unique error-IDs.
Basically this allows creating new unique error-IDs anywhere in the code.
However, defining such IDs in arbitrary namespaces tends to create
slight confusion and ambiguities, while maintaining the proper using
declarations requires some manual work.
Thus I introduce a new **standard scheme**
* Error-IDs for widespread use shall be defined _exclusively_ within `namespace lumiera::error`
* The shorthand-Macro `LERR_()` can now be used to simplify inclusion and referral
* (for local or single-usage errors, a local or even hidden definition is OK)
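A conceptual sketch of the scheme (names and macro expansion simplified here, not copied from lib/error.*):

```cpp
// Widely used error-IDs are plain C-string constants, defined exclusively
// in namespace lumiera::error...
namespace lumiera {
namespace error {
    extern const char* LUMIERA_ERROR_EXAMPLE;    // hypothetical widespread error-ID
}}

// ...while the shorthand macro removes the need for namespace qualification
// and using declarations at the usage site (assumed shape of the expansion):
#define LERR_(NAME)  lumiera::error::LUMIERA_ERROR_##NAME

// referral from arbitrary code then simply reads:  LERR_(EXAMPLE)
```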
reduce footprint of lib/util.hpp
(Note: it is not possible to forward-declare std::string here)
define the shorthand "cStr()" in lib/symbol.hpp
reorder relevant includes to ensure std::hash is "hijacked" first
showDecimal -> decimal10 (maximal precision to survive round-trip through decimal representation)
showComplete -> max_decimal10 (enough decimal places to capture each possible distinct floating-point value)
Use these new functions to rewrite the format4csv() helper
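A sketch of the underlying idea, assuming the obvious mapping onto std::numeric_limits (the actual helpers may be implemented differently):

```cpp
// decimal10:     print with numeric_limits<F>::digits10 places, i.e. maximal
//                precision guaranteed to survive a round-trip through decimal text
// max_decimal10: print with numeric_limits<F>::max_digits10 places, i.e. enough
//                digits to distinguish every representable floating-point value
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

template<typename F>
std::string decimal10 (F val)
{
    std::ostringstream buff;
    buff << std::setprecision (std::numeric_limits<F>::digits10) << val;
    return buff.str();
}

template<typename F>
std::string max_decimal10 (F val)
{
    std::ostringstream buff;
    buff << std::setprecision (std::numeric_limits<F>::max_digits10) << val;
    return buff.str();
}

int main()
{
    std::cout << decimal10 (1.0/3)     << '\n'    // 0.333333333333333
              << max_decimal10 (1.0/3) << '\n';   // 0.33333333333333331
}
```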
...this uncovered one inconsistency: when directly adding values
into one of the embedded data vectors, the inconsistent size
was allowed to persist even when adding / removing lines.
This contradicts the behaviour of the CSV dump,
which uses index positions from the front of all vectors uniformly.
Thus changed the behaviour of adding a new row, so that it now
caps all vectors to a common size
also added a function to clear the table
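A sketch of the corrected behaviour, assuming one vector per column (the exact capping policy used in the real code may differ):

```cpp
// When a new row is opened, all column vectors are first brought to a common
// size, so a value added directly into one column cannot leave the table skewed.
#include <algorithm>
#include <cstddef>
#include <vector>

using Column = std::vector<double>;

void newRow (std::vector<Column>& columns)
{
    std::size_t rows = 0;
    for (auto& col : columns)               // common size: the longest column (assumption)
        rows = std::max (rows, col.size());
    for (auto& col : columns)
        col.resize (rows);                  // cap / pad every column to the common size
    for (auto& col : columns)
        col.push_back (0.0);                // then open the new row with default values
}
```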
verify also that clean-up happens in case exceptions are thrown;
as an aside, add a macro to check for ''any'' exception and match
on something in the message (as opposed to just a Lumiera Exception)
...using the same method for the sake of uniformity
Also move the permissions helpers to the file.hpp support functions
and set up a separate unit test for these
Inspired by https://stackoverflow.com/a/58454949
Verified behaviour of fs::create_directory
--> it returns true only if it ''indeed could create'' a new directory
--> it returns false if the directory exists already
--> it throws when some other obstacle shows up
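A small demonstration of that behaviour (plain std::filesystem, example paths are hypothetical):

```cpp
#include <filesystem>
#include <iostream>
namespace fs = std::filesystem;

int main()
{
    fs::path dir{"/tmp/lumiera-example-dir"};

    std::cout << std::boolalpha
              << fs::create_directory (dir) << '\n'     // true  : directory newly created
              << fs::create_directory (dir) << '\n';    // false : it exists already
    try {
        fs::create_directory ("/proc/not-allowed");     // some other obstacle...
    }
    catch (fs::filesystem_error& err) {
        std::cout << "throws: " << err.what() << '\n';  // ...causes an exception
    }
}
```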
As an aside: the header include/limits.h could be cleaned up;
since it is used solely from C++ code, it could be typed, namespaced etc.
Since this is a much more complicated topic,
for now I decided to establish two instances through global variables:
* a sequence seeded with a fixed starting value
* another sequence seeded from a true entropy source
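A minimal sketch of this interim setup (illustrative names, the actual globals in the code base may differ):

```cpp
// Two global random sequences: one reproducible, one truly random.
#include <cstdint>
#include <random>

std::mt19937_64 defaultGen{55};                         // fixed starting value (hypothetical seed)
std::mt19937_64 entropyGen{std::random_device{}()};     // seeded from a true entropy source

// drop-in style usage: a value from [0 .. bound), bound > 0
inline std::uint64_t
defaultRand (std::uint64_t bound)
{
    return std::uniform_int_distribution<std::uint64_t>{0, bound-1}(defaultGen);
}
```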
What we actually need however is some kind of execution framework
to define points of random-seeding and to capture seed values for
reproducible tests.
Relying on random numbers for verification and measurements is known to be problematic.
At some point we are bound to control the seed values -- and in the actual
application usage we want to record sequence seeding in the event log.
Some initial thoughts regarding this intricate topic.
* a low-ceremony drop-in replacement for rand() is required
* we want the ability to pick-up and control each and every usage eventually
* however, some usages explicitly require true randomness
* the ability to use separate streams of random-number generation is desirable
Yesterday I decided to include some facilities I wrote in 2022
for the Yoshimi-Testsuite. The intention is to use these as-is, and just
to adapt them stylistically to the Lumiera code base.
However, at least some minimal documentation in the form of
basic unit-tests can be considered »acceptance criteria«
Initially the model was that of a single graph starting
with one seed node and joining all chains into a single exit node.
This however is not well suited to simulate realistic calculations,
and thus the ability to inject additional seeds and to randomly
sever some chains was added -- which overthrows the assumption of
a single exit node at the end, where the final hash can be retrieved.
The topology generation used to pick up all open ends, in order to
join them explicitly into a reserved last node; in the light of the
above changes, this seems like a superfluous complexity, and adds
a lot of redundant checks to the code, since the main body of the
algorithm, in its current form, already does all the necessary
bounds checks. Thus it suffices to just terminate the processing
once the complete node space is visited and wired.
Unfortunately this requires fixing basically all node hashes
and a lot of the statistics values of the test; yet overall
the generated graphs are much more logical; so this change
is deemed worth the effort.
Allow easy generation of a Chain-Load with all nodes unconnected,
yet each node on a separate level.
Fix a deficiency in the graph generation, which caused spurious
connections to be added at the last node, since the prune rule
was not checked
...the previous setup produced a single linear chain
instead of a set of unconnected nodes.
With this, the behaviour is more as expected,
but concurrency is still too low
- better use a Test-Chain-Load without any dependencies
- schedule all at once
- employ instrumentation
- use the inner »overall time« as dependent result variable
The timing results now show an almost perfect linear dependency.
Also the inner overall time seems to omit the setup and tear-down time.
But other observed values (notably the avgConcurrency) do not line up
- fill the range randomly with probe points
- use the node count as independent parameter
- measurement method *works as intended*
- results indeed show a linear relationship
Results are ''interesting'' however, since the (par,time) points
seem to be arranged into two lines, implying that about half
of the runs were somehow ''degraded'' and performed way slower.
With the latest improvements, the »breaking point search« works as expected
and yields meaningful data; however, it seems well suited only for specific
setups involving an extended graph with massive dependencies, since only
such a setup produces a clearly defined ''breaking point''.
Thus I'm considering complementing this research with another measurement
setup, to establish a linear regression model of the Scheduler expense.
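For reference, the intended calculation amounts to an ordinary least-squares fit over (load, time) observations; the following is the generic textbook formula, not the actual implementation:

```cpp
// Fit  time ≈ socket + gradient·load  over a set of measurement points.
#include <utility>
#include <vector>

std::pair<double,double>                    // returns (socket, gradient)
linearRegression (std::vector<std::pair<double,double>> const& points)
{
    double n = points.size();
    double sumX=0, sumY=0, sumXX=0, sumXY=0;
    for (auto [x,y] : points)
    {
        sumX  += x;    sumY  += y;
        sumXX += x*x;  sumXY += x*y;
    }
    double gradient = (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX);
    double socket   = (sumY - gradient*sumX) / n;        // constant base expense
    return {socket, gradient};
}
```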
To allow integration of this different setup into the existing stress-test-rig,
some rearrangements of the builder notation are necessary; especially we need
to pass the type name of the actual tool, and it seems advisable to
reorder the source code to provide the config base class `StressRig`
at the top, followed by a long (and very technical) implementation
namespace.
It turns out not to be correct to use all the divergence in concurrency
as a form factor, since it is quite common that not all cores can be active
at every level, given the structural constraints dictated by the load graph.
On the other hand, if the empirical work concurrency (excluding wait-time)
systematically differs from the simple model used for establishing the schedule,
then this should indeed be considered a form factor and deduced from
the effective stress factor, since it is not a reserve available for speed-up
The solution entertained here is to derive an effective compounded sum
of weights from the calculation used to build the schedule. This compounded
weight sum is typically lower than the plain sum of all node weights, which
is precisely due to the theoretical amount of expense reduction assumed
in the schedule generation. So this gives us a handle on the theoretically
expected expense, and through the plain weight sum we may draw conclusions
about the effective concurrency expected in this schedule.
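Spelled out (my paraphrase of the above, not a quote from the code): with Σwᵢ the plain sum of all node weights and W the compounded weight sum derived while building the schedule, W approximates the theoretically expected expense, and

    expectedConcurrency = Σwᵢ / W

so only deviations from this expected concurrency should enter the empirical form factor.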
Taking only this part as base for the empirical deviations yields search results
very close to stressFactor ~1 -- implying that the test setup now
observes what it was intended to observe...
In binary search, in order to establish the invariant initially,
a loop is necessary, since a single step might not be sufficient.
Moreover, the ongoing adjustments jeopardise detection of the
statistical breaking point condition, by causing a negative delta
due to gradually approaching the point of convergence -- leading
to an ongoing search in a region beyond the actual breaking point.
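A generic illustration of the first point (simplified bisection, not the actual breaking-point search):

```cpp
// Before bisecting, the upper bound must actually satisfy the condition;
// a single enlargement step may not suffice, so loop until it does.
#include <functional>

double bisect (std::function<bool(double)> breaksDown, double lo, double hi)
{
    while (not breaksDown (hi))             // establish the invariant first...
        hi *= 2;                            // ...possibly by several steps
    while (hi - lo > 0.01)
    {
        double mid = (lo+hi) / 2;
        (breaksDown (mid)? hi : lo) = mid;  // maintain: condition fails at lo, holds at hi
    }
    return hi;
}
```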
Various misconceptions identified in the feedback path of the test algorithm.
- statistics are cumulative, which must be accounted for by normalising on the time base
- average concurrency includes idle times, which is beside the point within this
test setup, since additional wait-phases are injected when reducing stress
Relying on the new instrumentation facility, the actually effective
concurrency and cumulative run time of the test jobs can be established.
These can now be cast into a form-factor to represent actual excess expenses
in relation to the theoretical model.
By adjusting the adapted schedule with this form factor,
it can be made to reflect the actual empiric load more closely,
hopefully leading to a more realistic effect of the stress-factor
and thus to results better suited for drawing conclusions about generic behaviour.
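As a formula (my reading of the above, semantics assumed):

    formFactor = observedExpense / modelledExpense

where the observed expense is derived from the measured cumulative run time and effective concurrency, and the modelled expense from the theoretical weight model; scaling the adapted schedule by this factor lets it track the empiric load.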
...turns out rather challenging to come up with any test case that is
both meaningful, simple to set up and understand, yet still
produces somewhat stable values. `IncidenceCount` seems most valuable
for investigation and direct inspection of results
Various experiments to watch Scheduler behaviour under extended load.
Notably the example committed here makes the Scheduler run for 1.2 sec
and process 800 jobs of 10ms each (i.e. about 8 sec of aggregate work
squeezed into 1.2 sec), thereby putting the system under 100% load on all CPUs
- supplement the pre-dimensioning for data capture (see the sketch below);
without that, sporadic memory corruption indeed happens (as expected, since
concurrent re-allocation of the vector holding an entry for each
thread is not threadsafe, and this test exhibits much contention)
- add top-level logging for better diagnostics of errors
emanating from the test run
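A sketch of why the pre-dimensioning matters (generic illustration, not the actual data-capture code):

```cpp
// Each thread writes into its own slot, so the vector must be sized up-front;
// growing it concurrently would re-allocate while other threads still hold
// references into the old storage (a classic data race).
#include <cstddef>
#include <thread>
#include <vector>

struct Capture { long counter = 0; /* ...per-thread measurement data... */ };

void runCapture (std::size_t nThreads)
{
    std::vector<Capture> perThread (nThreads);    // pre-dimensioned: no re-allocation later

    std::vector<std::thread> pool;
    for (std::size_t i=0; i<nThreads; ++i)
        pool.emplace_back ([&perThread,i]{ ++perThread[i].counter; });  // touch only the own slot
    for (auto& thread : pool)
        thread.join();
}
```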
Basically users are free to place the measurement calls to their liking.
This implies that bracketed measurement intervals can be defined to overlap,
even within a single thread, thereby counting the overlapping time span
several times. However, for the time spent per thread, only actual thread
activity should be counted, disregarding overlaps. Thus introduce a
new aggregate, ''active time'', which is the sum of all thread times.
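A small worked example of the distinction (numbers invented for illustration): if one thread brackets the intervals [0ms,5ms] and [3ms,8ms], naively summing the bracketed spans yields 5+5 = 10ms, whereas the thread was actually active only from 0ms to 8ms; its contribution to the ''active time'' is thus 8ms.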
As an aside, we do not need explicit randomness for the simple two-thread
test case, since timings are random anyway...
+ bugfix for out-of-bounds access
...since we've already established an integration over the event timeline,
it is just one simple further step to determine the concurrency level
on each individual segment of the timeline. Based on this attribution
- the averaged concurrency within the observation range can be computed as a weighted mean
- moreover, we can account for the precise cumulated time spent at each concurrency level
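Spelled out (standard weighted-mean formula, matching my reading of the above): with cᵢ the concurrency level on timeline segment i and Δtᵢ its duration,

    avgConcurrency = Σ(cᵢ·Δtᵢ) / Σ Δtᵢ

while the per-level accounting simply sums up the Δtᵢ of all segments sharing the same cᵢ.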