So this finally solves the fundamental problem regarding a race on
initialisation of the thread-wrapper; it does *not* solve the same problem
for classes deriving from thread-wrapper, which renders this design questionable
altogether -- but this is another story.
In the end, this initialisation-race is rooted in the very nature of starting a thread;
it seems there are two design alternatives:
- expose the thread-creation directly to user code (offloading the responsibility)
- offer building blocks which are inherently dangerous
this is a mere rearrangement of code (+lots of comments),
but helps to structure the overall construction better.
ThreadWrapper::launchThread() now does the actual work to build
the active std::thread object and assign it to the thread handle,
while buildLauncher is defined in the context of the constructors
and deals with wiring the functors and decaying/copying of arguments.
If we package all arguments together into a single tuple,
even including the member-function reference and the this-ptr
for the invokeThreadFunction(), which is the actual thread-functor,
then we can rely on std::make_from_tuple<T>(tuple), which implements
precisely the same hand-over via a std::index_sequence, as used by the
explicitly coded solution -- getting rid of some highly technical boilerplate
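To illustrate the gist (a minimal sketch, not the actual wrapper code; the helper name is hypothetical):

    #include <thread>
    #include <tuple>
    #include <utility>
    
    // package the invocable together with all arguments into one tuple;
    // std::make_from_tuple then constructs the std::thread directly,
    // re-expanding the tuple via std::index_sequence internally
    template<typename... INVO>
    std::thread
    launchThread (INVO&&... invocation)
    {
        auto package = std::make_tuple (std::forward<INVO> (invocation)...);
        return std::make_from_tuple<std::thread> (std::move (package));
    }
    
    // e.g. launchThread (&Wrapper::invokeThreadFunction, this, args...)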
Concept study of the intended solution successful.
Can now transparently embed any conceivable functor
and an arbitrary argument sequence into a launcher-λ
Materialising into a std::tuple<decay_t<TYPES...>> did the trick.
Considering a solution to shift the actual launch of the new thread
from the initialiser list into the ctor body, to circumvent the possible
"undefined behaviour". This would also be prerequisite for defining
a self-managed variant of the thread-wrapper.
Alternative / Plan.B would be to abandon the idea of a self-contained
"thread" building block, instead relying on precise setup in the usage
context -- however, not willing to yield yet, since that would be exactly
what I wanted to avoid: having technicalities of thread start, argument
handover and failure detection intermingled with the business code.
On a close look, the wrapper design as pursued here
turns out to be prone to insidious data race problems.
This was true also for the existing solution, but becomes
more clear due to the precise definitions from the C++ standard.
This is a confusing situation, because these races typically do not
materialise in practice; due to the latency of the OS scheduler the
new thread starts invoking user code at least 100µs after the Wrapper
object is fully constructed (typically more like 500µs, which is a lot)
The standard case (lib::Thread) in its current form is correct, but borders on
undefined behaviour, and any initialisation of members in a derived class
would be off limits (the thread-wrapper should not be used as baseclass,
rather as member)
...while reworking the application code, it became clear that
actually there are two further quite distinct variants of usage.
And while these could be implemented with some trickery based on
the Thread-wrapper defined thus far, it seems prudent better to
establish a safely confined explicit setup for these cases:
- a fire-and-forget-thread, which manages its own memory autonomously
- a thread with explicit lifecycle, with detectable not-running state
...hitting a roadblock however:
The existing Threads in the Lumiera application were effectively "detached"
But the C++14 framework requires us to keep the Thread-object alive
FamilyMember::allocateNextMember() was actually a post-increment,
so (unlike TypedCounter) no correction is necessary here
As an aside, WorkForce_test is sometimes unstable immediately after a build.
Seemingly a head start of 50µs is not enough to compensate for scheduler leeway
The existing TypedCounter_test was excessively clever and convoluted,
yet failed to test the critical elements systematically. Indeed, two
bugs were hidden in synchronisation and instance access.
- build a new concurrent test from scratch, now using the threadBenchmark
  function for the actual concurrent execution, and just invoke a
  randomly selected access to the counter repeatedly from a large number
  of threads.
- rework the TypedContext and counter to use Atomics where applicable;
  measurements indicate however that this has only negligible impact
  on the amortised invocation times, which are around 60ns for single-threaded
  access, yet can increase by a factor of 100 under contention.
...these were already written envisaging the new API,
so it's more or less a drop-in replacement.
- can't use std::vector anymore, since thread objects are move-only
- use ScopedCollection instead, which also has the benefit of
  allocating the required space up-front. Also allow deducing the
  type parameter of the placed elements
... which became apparent after switching to the new Thread-wrapper implementation
... the reason is a bug in the Thread-Monitor (which will also be reworked soon)
While seemingly subtle, this is a ''deep change.''
Up to now, the project attempted to maintain two mutually disjoint
systems of error reporting: C-style error flags and C++ exceptions.
Most notably, an attempt was made to keep both error states synced.
During the recent integration efforts, this increasingly turned out
as an obstacle and source for insidious problems (like deadlocks).
As a resolution, the relation of both systems is hereby **clarified**:
* C-style error flags shall only be set and used by C code henceforth
* C++ exceptions can (optionally) be thrown by retrieving the C-style error code
* but the opposite is now ''discontinued'' : Exceptions ''do not set'' the error flag anymore
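In code, the clarified one-way relation amounts to roughly this (lumiera_error() is the existing C accessor; the concrete exception type is assumed for illustration):

    // C code sets the error flag; C++ may optionally translate it into an exception
    if (const char* flag = lumiera_error())    // fetches (and clears) the C-style error flag
        throw lumiera::error::State{flag};     // exception type assumed for illustration
    // ...but throwing a C++ exception does not set the C error flag anymore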
- the deadlock was caused by leaking error state through the C-style lumiera_error
- but the reason for the deadlock lies in the »convenience shortcut«
in the Object-Monitor scope guard for entering a wait state immediately.
This function undermines the unlocking-guarantee, when an exception
emanates from within the wait() function itself.
...this function was also ported to the new wrapper,
and can be verified now in a much more succinct way.
''This completes porting of the thread-wrapper''
Since the decision was taken to retain support for this special feature,
and even extend it to allow passing values, the additional functionality
should be documented in the test. Doing so also highlighted subtle problems
with argument binding.
A subtle yet important point: arguments will always be copied into the new thread.
This is a (very sensible) limitation introduced by the C++ standard.
To support seamless use, the thread-wrapper now rewrites the argument types
picked up from the invocation, to prevent passing on a reference type,
which typically ensues when invoking with a variable name. Otherwise
confusing error messages would be emitted from deep within the STD library.
As a further consequence, function signatures involving reference arguments
can no longer be bound (which is desirable; a function to be performed
within a separate thread must either rely on value arguments, or deliberately
use std::ref wrappers to pass references, assuming you know what you're doing)
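For illustration, this mirrors the behaviour the standard mandates for std::thread itself:

    #include <thread>
    #include <functional>
    
    void work (int val, int& accumulator) { accumulator += val; }
    
    int main()
    {
        int sum = 0;
      //std::thread t1{work, 1, sum};             // rejected: the decayed copy can not bind to int&
        std::thread t2{work, 1, std::ref(sum)};   // deliberately pass a real reference via std::ref
        t2.join();
    }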
- it is not directly possible to provide a variadic join(args...),
due to overload resolution ambiguities
- as a remedy, simplify the invocation of stringify() for the typical cases,
and provide some frequently used shortcuts
A common usage pattern is to derive from lib::Thread
and then implement the actual thread function as a member function
of this special-Thread-object (possibly also involving other data members)
Provide a simplified invocation for this special case,
also generating the thread-id automatically from the arguments
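A sketch of this usage pattern (the exact signature of the simplified invocation is assumed here):

    // hypothetical example: thread function as member of the special-Thread-object
    class Renderer
      : public lib::Thread
      {
        void
        doRender (int jobNr)
          { /* ...business code, running in the new thread... */ }
        
      public:
        Renderer (int jobNr)
          : Thread{&Renderer::doRender, this, jobNr}   // thread-ID generated from the arguments
          { }
      };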
after all this groundwork, implementing the invocation,
capturing and hand-over of results is simple, and the
thread-wrapper classes became fairly understandable.
This relieves the Thread policy from a lot of technicalities,
while also creating a generally useful tool: the ability to invoke
/anything callable/ (thanks to std::invoke) in a fail-safe way and
transform the exception into an Either type
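The gist of this tool as a self-contained sketch, using std::variant in place of lib::Result (and assuming a non-void result):

    #include <variant>
    #include <exception>
    #include <functional>
    #include <type_traits>
    
    template<typename RES>
    using Either = std::variant<std::exception_ptr, RES>;   // stand-in for lib::Result
    
    // invoke anything callable; on failure transport the exception as value
    template<class FUN, typename... ARGS>
    Either<std::invoke_result_t<FUN,ARGS...>>
    failsafeInvoke (FUN&& fun, ARGS&&... args) noexcept
    try {
        return std::invoke (std::forward<FUN>(fun), std::forward<ARGS>(args)...);
    }
    catch(...) {
        return std::current_exception();
    }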
on second thought, the ability to transport an exception still seems
worthwhile, and can be achieved by some rearrangements in the design.
As preparation, reorganise the design of the Either-wrapper (lib::Result)
VERIFY_ERROR allows checking that an expected exception is actually thrown.
The implementation was lazy however;
it just investigated the C-style error flag instead of *really* verifying
that a *lumiera::Exception* with the expected flag was caught.
This discrepancy can be a problem when there is a stray error flag set,
or for some reason the error flag gets cleared before the exception
reaches the top-level catch-block in the test.
- relocate some code into a dedicated translation unit to reduce #includes
- actually set the thread-ID (the old implementation had only a TODO at that point)
While it would be straightforward from an implementation POV
to just expose both variants on the API (as the C++ standard does),
it seems prudent to enforce the distinction, and to highlight the
auto-detaching behaviour as the preferred standard case.
Creating worker threads just for one computation and joining the results
seemed like a good idea 30 years ago; today we prefer Futures or asynchronous
messaging to achieve similar results in a robust and performant way.
ThreadJoinable can come in handy however for writing unit tests, where
the controlling master thread has to wait before performing verification.
So the old design seems well advised in this respect and will be retained
- cut the ties to the old POSIX-based custom threadpool framework
- remove operations deemed no longer necessary
- sync() obsoleted by the new SyncBarrier
- support anything std::invoke supports
...which is the technique used in the existing Threadpool framework.
As expected, such a solution is significantly slower than the new
atomics-based implementation. Yet how much slower is still striking.
Timing measurements in concurrent usage situation.
Observed delay is in the order of magnitude of known scheduling leeway;
assuming thus no relevant overhead related to implementation technique
Over time, a collection of microbenchmark helper functions was
extracted from occasional use -- including a variant to perform
parallelised microbenchmarks. While not used beyond sporadic experiments yet,
this framework seems a perfect fit for measuring the SyncBarrier performance.
There is only one catch:
- it uses the old Threadpool + POSIX thread support
- these require the Threadpool service to be started...
- which in turn prohibits using them for library tests
And last but not least: this setup already requires a barrier.
==> switch the existing microbenchmark setup to c++17 threads preliminarily
(until the thread-wrapper has been reworked).
==> also introduce the new SyncBarrier here immediately
==> use this as a validation test of the setup + SyncBarrier
Using the same building blocks, this operation can be generalised even more,
leading to a much cleaner implementation (also with better type deduction).
The feature actually used here, namely summing up all values,
can then be provided as a convenience shortcut, filling in std::plus
as a default reduction operator.
...first used as part of the test harness;
seemingly this is a generic and generally useful shortcut,
similar to algorithm::reduce (or some kind of fold-left operation)
Intended as replacement for the Mutex/ConditionVar based barrier
built into the existing Lumiera thread handling framework and used
to ensure safe hand-over of a bound functor into the starting new
thread. The standard requires a comparable guarantee for the C++17
concurrency framework, expressed as a "synchronizes_with" assertion
along the lines of the Atomics framework.
While in most cases dedicated synchronisation is thus not required
anymore when switching to C++17, some special extended use cases
remain to be addressed, where the complete initialisation of
further support framework must be ensured.
With C++20 this would be easy to achieve with a std::latch, so we
need a simple workaround for the time being. After consideration of
the typical use case, I am aiming at a middle ground in terms of
performance, by using a yield-wait until satisfying the latch condition.
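A minimal sketch of that middle ground (the real SyncBarrier will differ in detail):

    #include <atomic>
    #include <thread>
    
    class SyncBarrier
      {
        std::atomic<int> latch_;
        
      public:
        explicit
        SyncBarrier (int nFold =2)
          : latch_{nFold}
          { }
        
        void
        sync()   // block each participant until the latch condition is satisfied
          {
            latch_.fetch_sub (1, std::memory_order_acq_rel);
            while (0 < latch_.load (std::memory_order_acquire))
                std::this_thread::yield();   // yield-wait: no spin-burn, no kernel blocking
          }
      };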
The investigation for #1279 leads to the following conclusions
- the features and the design of our custom thread-wrapper
almost entirely matches the design chosen meanwhile by the C++ committee
- the implementation provided by the standard library however uses
modern techniques (especially Atomics) and is more precisely worked out
than our custom implementation was.
- we do not need an *active* threadpool with work-assignment,
rather we'll use *active* workers and a *passive* pool,
which was easy to implement based on C++17 features
==> decision to drop our POSIX based custom implementation
and to retrofit the Thread-wrapper as a drop-in replacement
+++ start this refactoring by moving code into the Library
+++ create a copy of the Threadwrapper-code to build and test
the refactorings while the application itself still uses
existing code, until the transition is complete
...which however brings the problem that we can no longer block the destructor
of WorkForce by simply joining on all joinable threads (there is a race
between testing joinable() and invoking join(), which does not tolerate
non-joinable state).
There is a second problem: we need to detect and clean-up terminated workers,
even for just finding out how many workers are still active. Fortunately
doing so also solves the waiting problem in the destructor
While in principle it would be possible (and desirable)
to control worker behaviour exclusively through the Work-Functor's return code,
in practice we must concede that Exceptions can always happen from situations
beyond our control. And while it is necessary for the WorkForce-dtor to
join and block (we can not just pull away the resources from running threads),
the same destructor (when called out of order) must somehow be able
at least to ask the running threads to terminate.
Especially for unit tests this becomes an obnoxious problem -- otherwise
each test failure would cause the test runner to hang.
Thus adding an emergency halt, and also improving the setup for tests
with a convenience function to inject a work-function-λ
- investigate consistency guarantees through acquire-release
==> turns out we do not need a fence; rather, it is essential
    to have a guard variable and to actually load and check
    its value to ensure we indeed get a happens-before
    (see the sketch after this list)
- elaborate design of the WorkForce
+ no shared control variables necessary
+ no ability to forcibly shut-down the WorkForce
+ rather, all control will be exerted through the return value
of the Work-Functor
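Sketch of the essential point: a release store, paired with an acquire load of a guard variable whose value is actually checked:

    #include <atomic>
    #include <thread>
    #include <cassert>
    
    std::atomic<bool> guard{false};
    int payload = 0;                 // plain, non-atomic data
    
    void producer()
    {
        payload = 42;                                    // plain write...
        guard.store (true, std::memory_order_release);   // ...published by the release store
    }
    
    void consumer()
    {
        while (not guard.load (std::memory_order_acquire))
            std::this_thread::yield();                   // the value is actually loaded and checked
        assert (payload == 42);                          // synchronizes-with ⟹ happens-before
    }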
Up to now, the DiagnosticFun mock in ActivityDetector only
created an EventLog entry on invocation and was able to return
a canned result value. Yet for the job invocation scenario test,
it would be desirable to hook-in a λ with a fake implementation
into the ExecutionContext. As a further convenience, the
return value is now default initialised, instead of being
marked as uninitialised until invocation of "returning(val)"
...regarding the kind of activity (the verb),
and also for some special case access of payload data;
deliberately asserting the correct verb, but no mandatory check,
since this whole Activity-Language is conceived as cohesive
and essentially sealed (not meant to be extended)
It is not sufficient just to pass this "current time" as parameter
into the ActivityLang::dispatchChain(), since some Activities within
this chain will essentially be long-running (think rendering); thus
we need a real callback from within the chain. The obvious solution
is to make this part of the Execution Context, which is an abstraction
of the scheduler environment anyway
...turns out there is still a lot of leeway in the possible implementation,
and seemingly it is too early to decide which case to consider the default.
Thus I'll proceed with the drafted preliminary solution...
- on primary-chain, an inhibited Gate dispatches itself into future for re-check
- on Notification, activation happens if and only if this very notification opens the Gate
- provide a specifically wired requireDirectActivation() to allow enforcing a minimal start time
While the ''general direction'' seems clear, some in-depth
analysis was required to find out what information can reasonably
be expected to be available at this point.
The decision was made to shift the actual deadline calculation
into the Job-Planning altogether, assuming that a preliminary solution
based on data implicitly available there will be enough to implement
simple linear playback, while precise management of job start times
can be added in later, when observation of actual timing behaviour
is available...
Solved by special treatment of a notification, which happens
to decrement the latch to zero: in this case, the chain is
dispatched, but also the Gate is locked permanently to block
any further activations scheduled or forwarded otherwise
TODO: while correct as implemented, the handling of the
notification seems questionable: re-scheduling the chain immediately
may lead to multiple invocations, because the chain might have been »spun«
and thus re-scheduled already, and we have no way to find out about that
...can not take a shortcut here, since the timing information
embedded into the POST-Activity must somehow be transported
to the Scheduler; key point to note is that the chain will
be performed in »management mode« (single threaded)
...attempt to get this intricate state machine sorted out
Notification turned out quite tricky, since it may emanate
from a concurrently executed phase and we try to avoid having
to protect the gate directly with a lock; rather we re-dispatch
the notification through the queue, which indirectly also ensures
that the worker de-queuing the NOTIFY-Activity operates in
management mode (single threaded, holding the GroomingToken)
Each Epoch in the memory manager holds a Gate in the first slot;
after the logic for Gate-activation is worked out now, we can switch
to using this actual logic to determine when an Epoch can be released
Decision how to handle a failed Gate-check
- spin forward (re-schedule) by some time amount
- this spin-offset parameter is retrieved from the Execution Context
- thus it will be some kind of engine parameter
With these determinations and the framework for the Execution Context
it is now possible to code up the logic for Gate check, which in turn
can then be verified by the watchGate diagnostics
due to technical limitations this requires wiring the adaptor
as replacement for the subject Activity, so that it can capture
and log the activation, and then pass it on to its watched subject
...turns out that util::toString does not explicitly handle pointers differently,
for very good reasons; this function must always work, always produce a simple and
compact representation, and it must be possible to instantiate the template
and take a function reference (which precludes adding an overload for pointers)
requires to supplement EventLog matching primitives
to pick and verify a specific positional argument.
Moreover, it is more or less arbitrary which job invocation parameters
are unpacked and exposed for verification; we'll have to see what is
actually required for writing tests...
doing so would contradict the fundamental architecture,
all kinds of failures and timeouts need to be handled within
Scheduler-Layer-2 rather.
Jobs are never aborted, nor do they need to know if and when they are invoked
Testcase (detect function invocation) passes now as expected
Some Library / Framework changes
- rename event-log-test.cpp
- allow the ExpectString also to work with concatenated expectation strings
Remark: there was a warning in the comment in event-log.hpp,
pointing out that negative assertions are shallow.
However, after the rework in 9/2018 (commit: d923138d1)
...this should no longer be true, since we perform proper backtracking,
leading to an exhaustive search.
ActivityMatch inherits privately from the EventMatch object,
and is thus able to delegate relevant matching queries, but
also to provide high-level special matchers.
This new design resolves the ambiguity regarding function arguments.
Moreover, we can now record the current sequence-Number as *attribute*
in the respective log record (this is the benefit of using structured
log entries instead of just a textual log), thereby avoiding the various
pitfalls with explicit bracketing sequence-number log entries
bottom line: this reworked design seems to be a better fit,
even while technically the implementation with the wrapped matcher
is somewhat ugly...
The EventLog seems to provide all the building blocks, but we need
some higher level special matchers (and maybe we also want to hide
some of the basic EventLog matchers). A solution might be to wrap
the EventMatcher and delegate all follow-up builder calls.
This seems adequate, since the EventLog-Matcher is basically used as black box,
building up more elaborate matchers from the provided basic matchers...
Spent some time again to understand how EventLog matching works.
My feelings towards this piece of code are always the same: it is
somewhat too "tricky", but I am not aware of any other technique
to get this degree of elaborate chained matching on structured records,
short of building a dedicated matching engine from scratch.
The other alternative would be to use a flat textual log (instead of
the structured log records from EventLog), but then we'd have to
generate quite intricate regular expressions from the builder,
and I'm really doubtful it would be easier and clearer....
...turns out this is entirely generic and not tied to the context
within ActivityDetector, where it was first introduced to build a
mock functor to log all invocations.
Basically this meta-function generates a new instantiation of the
template X, using the variadic argument pack from template U<ARGS...>
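The core of such a rebinding meta-function, sketched (name hypothetical):

    template<template<typename...> class X, typename U>
    struct RebindVariadic;
    
    template<template<typename...> class X
            ,template<typename...> class U
            ,typename...ARGS>
    struct RebindVariadic<X, U<ARGS...>>
      {
        using Type = X<ARGS...>;
      };
    
    // e.g. RebindVariadic<std::tuple, std::variant<int,bool>>::Type  ≡  std::tuple<int,bool>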
...for coverage of the Activity-Language,
various invocations of unspecific functions must be verified,
with the additional twist that the implementation avoids indirections
and is thus hard to rig for tests.
Solution-Idea: provide a λ-mock to log any invocation into the
Event-Log helper, which was created some years ago to trace GUI communication...
essentially define a concept how to ''perform'' render activities in the Scheduler.
This entails to specify the operation patterns for the four known base cases
and to establish a setup for the implementation.
Further extensive testing with parameter variations,
using the test setup in `BlockFlow_test::storageFlow()`
- Tweaks to improve convergence under extreme overload;
  sudden load peaks are now accommodated typically < 5 sec
- Make the test definition parametric, to simplify variations
- Extract the generic microbenchmark helper function
- Documentation
There seems to be a ''sweet spot'' for somewhat larger Epoch sizes around 500 slots.
At least in the test setup used here, which works with a load of 200 Frames / sec,
which is significantly over the typical value of 50fps (video + audio) for simple playback.
Averaged allocation times can not be improved much **below 30ns**.
Overall, this can be considered a good result,
since this allocation scheme does way more than just allocate memory,
it also provides a means to track dependencies and lifecycle.
__For context__:
- we should strive at processing one frame in ~ 10ms
- for 10 Activity records per Frame, we currently use < 0.5 µs for
memory and dependency management in the scheduler
- this leaves enough room for the further administrative efforts
(priority queue, job planning, buffer management)
... while this is a comparison of apples and oranges, since the standard
heap allocator does not offer any dependency and lifecycle management,
while the BlockFlow scheme developed here is much more complex and
offers a lifetime and dependency control specifically tailored to
the needs of the Scheduler.
Anyway, with the latest tweaks and refactorings, the test case
now shows averaged times per allocation on a comparable level
(both in the range of ~30ns)
BUT -> +50% runtime in -O3 (+20ns)
Investigation seems to indicate
- that the increased moving-average window (+1 Epoch, 10 -> 11)
  caused the algorithm to perform worse (strong effect)
- that the Optimiser has problems with boost::rational, which however
yields only a minute effect (+5ns), and only on the critical path
The access via Meyers Singleton has no adverse effect,
rather the new setup gives a tiny benefit (46ns -> 37ns).
Surprisingly, the increased pre-allocation has no observable effect.
On the long run, there will be a central Render Engine parametrisation;
some parameters can even be expected to be dynamic; thus prepare the
BlockFlow allocator to fit in with this expectation
For comparison: use individual management by refcount.
This supports the conclusion that BlockFlow is more than just a
custom allocator; it also supports a non-trivial lifetime management,
and this comes at a cost.
Playing around with various load patterns uncovers further weak spots
in the regulation mechanism. As a remedy, introduce a stronger feed-back
and especially set the target load factor from 100% -> 90%
to add some headroom to absorb intermittent load peaks
Presumably ''much more observation and fine-tuning'' will be necessary
under real-world load conditions (⟹ Ticket #1318 for later)
- BUG: must prevent the Epoch size from becoming excessively low
- Problem: feedback signal should not be overly aggressive
Fine-Tuning:
- Dose for Overflow-compensation is delicate
- Moving average and Overflow should be balanced
- ideally the compensatory actions should be one order of magnitude
slower than the characteristic regulation time
Improvement: perform Moving-Average calculations in doubles
..as a heuristic to regulate optimal Epoch duration;
when Epochs are discarded, the effective fill factor can be used
to guess an Epoch duration time, which would (in hindsight)
lead to perfect usage of storage space
..using a simplistic implementation for now: scale down the
Epoch-stepping by 0.9 to increase capacity accordingly.
This is done on each separate overflow event, and will be
counterbalanced by the observation of Epoch fill ratio
performed later on clean-up of completed Epochs
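In numbers, the heuristic amounts to a simple projection (a sketch; names hypothetical):

    // an Epoch discarded at fill factor f could have covered 1/f of its time span;
    // use that as the (hindsight-)optimal duration guess
    double guessEpochDuration (double usedDuration, double fillFactor)
    {
        return usedDuration / fillFactor;   // e.g. 10ms Epoch, half filled ⟹ 20ms would be ideal
    }
    // counterpart on each overflow event: epochStep *= 0.9  (increase capacity)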
further implementation makes clear that the AllocationHandle,
which is the primary usage front-end, has to rely both on
services of the underlying ExtentFamily allocator, as well
as on the BlockFlow itself for managing the Epoch spacing.
- fix a bug in IterExplorer: when iterating a »state core« directly,
the helper CoreYield passed the detected type through ValueTypeBindings.
This is logically wrong, because we never want to pick up some typedefs,
rather we always want to use the type directly returned from CORE::yield()
Here the iterator returns an Epoch&, which itself is again iterable
(it inherits from std::array<Activity, N>). However, it is clear
that we must not descend into such a "flatMap" style recursive expansion
- draft a simple scheme how to regulate Epoch lengths dynamically
- add diagnostics to pinpoint a given Activity and find out into which
Epoch it has been allocated; used to cover the allocator behaviour
- add preliminary deadline-check (directly instead of using the Activity)
- with this shortcut, now able to implement discarding obsoleted Epochs
- Iteration and use of the underlying `ExtentFamily` is also settled by now
💡 ''Implementation concept for the allocation scheme complete and validated''
...with the preceding IterableDecorator refactoring,
the navigation and access to the storage extents can now be
organised into a clear progression
Allocator::iterator -> EpochIter -> Epoch&
Convenience management and support functions can then be
pushed down into Epoch, while iteration control can be done
high-level in BlockFlow, based on the helpers in Epoch
Especially for the BlockFlow allocator, sanity checks are elided
for performance reasons; yet, generally speaking, it can be a very bad idea
to "optimise" away sanity checks. Thus an additional adaptor is provided
to layer such checks on top of an existing core; and IterExplorer now
always wires in this additional adaptor, and so the original behaviour
is now restored in this respect (and for the largest part of the code base)
While at first sight just a superficial variation of the existing IterStateWrapper,
it became clear with the evolution of the IterExplorer framework that
this setup represents a distinct concept, and especially lends itself
for complex and cohesive collaboration in a layered pipeline. Which
may, or may not be a good idea, depending on the circumstances.
Now, for the implementation of the scheduler memory allocation scheme,
another twist is added to the picture: we can not afford the sanity checks
on each access, even more so when layering / adapting iterators, where
it is essential that the optimiser can remove all unnecessary warts.
..this is the most simple case, where no Epochs are opened yet
..add diagnostics to inspect alloc count and deadlines
..add accessors for the first/last underlying Extent
...continue to proceed test-driven
...scheduler internals turn out to be intricate and cohesive,
and thus the only hope is to adhere to strict testing discipline
Library: add "obvious" utility to the IterExplorer, allowing to
materialise all contents of the Pipeline into a container
...use this to take a snapshot of all currently active Extent addresses
- use a checksum to prove that ctor / dtor of "content" is not invoked
- let the usage of active extents "wrap around" so that the mem block is re-used
- verify that the same data is still there
The low-level allocator is basically implemented now,
but we still need to check thoroughly that the tricky
wrap-around and expansion logic behaves sane...
(see #1311)
Using a Storage* within a wrapper as "pos" will work,
but is borderline trickery, since it amounts to subverting
the idea behind IterAdapter (which is to encapsulate a target
pointer with some control-logic in the managing container).
Using the same storage size and implementation overhead,
it is much more straight-forward to package the complete
iteration logic into a »State Core«, which in this case
however maintains a back-link to the ExtentFamily.
Iteration should just yield a reference to an Extent,
thereby hiding all details of the actual raw storage (char[]).
This can be achieved by using a wrapper type around a pointer
into the managing vector; from this pointer we may convert
into a vector::iterator with the trick described here
https://stackoverflow.com/a/37101607/444796
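The trick from the linked answer, in essence:

    // re-establish a proper vector::iterator from a raw pointer into the vector
    std::vector<int> storage {1,2,3,4};
    int* pos = &storage[2];
    auto iter = storage.begin() + (pos - storage.data());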
Furthermore, continued planning of the Activity-Language,
basically clarified the complete usage scenario for now;
seems all implementable right away without further difficulties
..with the ability to grow on demand..
..possibly add the new extents in the middle, by first allocating at the end
and then using the std::rotate() algo to bring them to the point
in the middle where new extents are required
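Sketch of that idea:

    #include <algorithm>
    #include <cstddef>
    
    // grow by cnt new extents "in the middle" at position pos:
    // allocate them at the end, then rotate them into place
    template<class VEC>
    void
    insertExtents (VEC& extents, size_t pos, size_t cnt)
    {
        size_t oldSiz = extents.size();
        extents.resize (oldSiz + cnt);       // new extents appended at the end
        std::rotate (extents.begin()+pos
                    ,extents.begin()+oldSiz
                    ,extents.end());         // ...now moved to position pos
    }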
- the idea is to use slot-0 in each extent for administrative metadata
- to that end, a specialised GATE-Activity is placed into slot-0
- decision to use the next-pointer for managing the next free slot
- thus we need the help of the underlying ExtentFamily for navigating Extents
Decision to refrain from any attempt to "fix" excessive memory usage,
caused by Epochs still blocked by pending IO operations. Rather, we
assume the engine uses sane parametrisation (possibly with dynamic adjustment)
Yet still there will be some safety limit, but when exceeding this limit,
the allocator will just throw, thereby killing the playback/render process
- decision to favour small memory footprint
- rather use several Activity records to express invocation
- design Activity record as »POD with constructor« (sketched below)
- conceptually, Activity is polymorphic, but on implementation
  level this is "folded down" into union-based data storage,
  layering accessor functions on top
- decision how to handle the Extent storage (by forced-cast)
- decision to place the administrative record directly into the Extent
TODO not clear yet how to handle the implicit limitation for future deadlines
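A rough sketch of this layout (field names hypothetical):

    #include <cstdint>
    
    // »POD with constructor«: conceptually polymorphic, folded into a union
    struct Activity
      {
        enum Verb { INVOKE, NOTIFY, GATE, POST };
        
        Verb verb_;
        Activity* next;                     // next-pointer, also used for the free-slot chain
        
        union {
            struct { int64_t frameNr;  }            invocation;
            struct { Activity* target; }            notification;
            struct { int64_t deadline; int latch; } condition;
          }
          data_;                            // payload storage, accessor functions layered on top
        
        Activity (Verb verb)                // ctor just tags the record; no vtable
          : verb_{verb}, next{nullptr}, data_{} { }
      };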
using a simple yet performant data structure.
Not clear yet if this approach is sustainable
- assuming that no value initialisation happens for POD payload
- performance trade-off: growth when in wrapped-state vs using a list
....still about to find out what kinds of Activities there are,
and what reasonably to implement on layer-2 vs. layer-1
It is clear that the worker will typically invoke a doWork()
operation on layer-2, which in turn will iterate layer-1.
Each worker pulls and performs internal management tasks exclusively
until encountering the next real render task, at which point it will
drop an exclusion flag and then engage into performing the actual
extended work for rendering...
- define a simple record to represent the Activity
- define a handle with an ordering function
- low-level functions to...
+ accept such a handle
+ pick it from the entrance queue
+ pass it for prioritisation into the PriQueue
+ dequeue the top priority element
The second design from 2017, based on a pipeline builder,
is now renamed `TreeExplorer` ⟼ `IterExplorer` and uses
the memorable entrance point `lib::explore(<seq>)`
✔
after completing the recent clean-up and refactoring work,
the monad based framework for recursive tree expansion
can be abandoned and retracted.
This approach from functional programming leads to code,
which is ''cool to write'' yet ''hard to understand.''
A second design attempt was based on the pipeline and decorator pattern
and integrates the monadic expansion as a special case, used here to
discover the prerequisites for a render job. This turned out to be
more effective and prolific and became standard for several exploring
and backtracking algorithms in Lumiera.
- allow to configure the expected job runtime in the test spec
- remove link to EngineConfig and hard-wire the engine latency for now
... extended integration testing reveals two further bugs ;-)
... document deadline calculation
This finishes the last series of refactorings; the basic concept
remains the same, but in the initial version we arranged the expander
function in the pipeline to maintain a Tuple (parent, child) for the
JobTickets. Unfortunately this turned out to be insufficient, since
JobTicket is effectively const and responsible for a complete Segment,
so there is no room to memorise a Deadline for the parent dependency.
This leads to the better idea to link the JobPlanning aggregators
themselves by parent-child references, which is possible since the
whole dependency chain actually sits in the stack embedded into the
Expander (in the pipeline)
This very deep change (which requires almost complete rebuild)
was prompted by the need to process an object (JobPlanning),
which holds several references and is thus move-only, in the
middle of a complex processing pipeline with child expansion.
If this works out well, a long-standing and obnoxious problem
with transforming iterators would be solved, albeit by incurring
a (presumably small) performance overhead, since now the new
value is no longer *assigned*, but rather the existing payload
is destroyed and a new instance is copy/move constructed into
the inline buffer.
The primary purpose (a pattern widely used in Lumiera) is to have a
Lambda create a new Object, which is then returned by value
and thus immediately moved into this inline buffer, where it
resides for further use (as long as the enclosing pipeline
stays alive). Unless such an object does very elaborate
allocations and registrations behind the scene, the
expense of assigning vs creating should be the same.
...in the hope that the Optimiser is able to elide those references entirely,
when (as is here the case) they point into another field of a larger object compound
...as a preparation for solving a logical problem with the Planning-Pipeline;
it can not quite work as intended just by passing down the pair of
current ticket and dependent ticket, since we have to perform a chained
calculation of job deadlines, leading up to the root ticket for a frame.
My solution idea is to create the JobPlanning earlier in the pipeline,
already *before* the expansion of prerequisites, and rather to integrate
the representation of the dependency relation directly into JobPlanning
...using hard coded values instead of observation of actual runtimes,
but at least the calculation scheme (now relocated from TimeAnchor to JobPlanning)
should be a reasonable starting point.
TODO: test fails...
- collect list of entities to be picked up from the dispatcher-pipeline
- as it turns out: there is no sensible use for the realTimeDeadline
  in the FrameCoord record ==> remove it
The initial implementation effort for Player and Job-Planning
has been reviewed and largely reworked, and some parts are now
obsoleted by the reworked alternative and can be disabled.
The basic idea will be retained though: JobPlanning is a
data aggregator and performs the final step of creating a Job
- had to fix a logical inconsistency in the underlying Expander implementation
in TreeExplorer: the source-pipeline was pulled in advance on expansion,
in order to "consume" the expanded element immediately; now we retain
this element (actually inaccessible) until all of the immediate
children are consumed; thus the (visible) state of the PipeFrameTick
stays at the frame number corresponding to the top-level frame Job,
while possibly expanding a complete tree of flexible prerequisites
This test now gives a nice visualisation of the interconnected states
in the Job-Planning pipeline. This can be quite complex, yet I still think
that this semi-functional approach with a stateful pipeline and expand functors
is the cleanest way to handle this while encapsulating all details
- fix a bug in the MockDispatcher when duplicating the ExitNodes:
  a vector-ctor with curly braces will be interpreted as std::initializer_list
  (see example below)
- add visualisation of the contents appearing at the end of the pipeline
*** something still broken here, increments don't happen as expected
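The gotcha in isolation:

    #include <vector>
    
    std::vector<int> v1 (3, 42);   // parentheses: three elements, each with value 42
    std::vector<int> v2 {3, 42};   // braces: std::initializer_list wins ⟹ two elements, 3 and 42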
`steam/engine/mock-dispatcher.hpp |cpp` now integrates this
''complete mock setup for render jobs and frame dispatching.''
The existing `DummyJob` has been slightly adapted and renamed
to `MockJob` and is tightly integrated with the other mocks.
The implementation of a `MockDispatcher` necessitated to change
the use of `MockJobTicket`. The initial attempts used a complete
mock implementation, but this approach turned out not to be viable.
Instead — based on the ideas developed for the mock setup —
now the prospective real implementation of `JobTicket` is available
and will be used by the mock setup too. Instead of a synthetic spec,
now a setup of recursively connected `ExitNode`(s) is used; the latter
seems to develop into some kind of Facade for the render node network.
Based on this mock setup, we can now demonstrate the (mostly) complete
Job-Planning pipeline, starting from a segmentation up to render jobs,
and verify proper connectivity and job invocation.
✔
- has to be prepared / supported by the RenderEnvironmentClosure
- actual translation happens when building the Dispatcher-Pipeline
- implementation delegate through
virtual size_t Dispatcher::resolveModelPort (ModelPort)
...ouch this was insidious: the STL implementation for list does not
return a pointer to the element just allocated, but rather retrieves
and dereferences the back() / front() iterator after returning from emplace_back|front()
...which in case of re-entrant allocations is something wildly different
than the initial allocation. Thus a *cheap* and dirty placeholder implementation
just using a STL container is not possible, and we need at least
to code up a likewise cheesy placeholder implementation by hand.
- separate allocation and ctor call
- use an inline buffer in the STL container
- explicitly handle ctor failures to discard the allocation (sketched below)
- NOT THREADSAFE and likely WASTEFUL in terms of performance
==> MockSupport_test now back to GREEN after complete refactoring
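The emplacement logic, sketched (the raw allocation primitives are hypothetical):

    #include <new>
    #include <utility>
    
    // separate raw allocation from construction; a ctor failure discards the slot
    template<class TY, typename...ARGS>
    TY&
    emplaceElement (ARGS&& ...args)
    {
        void* slot = allocateSlot (sizeof(TY));    // hypothetical raw allocation
        try {
            return *new(slot) TY{std::forward<ARGS> (args)...};
        }
        catch(...) {
            discardSlot (slot);                    // hypothetical clean-up
            throw;
        }
    }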
The existing implementation of the Player from 2012~2015 included
an additional differentiation by media channel (for multichannel media)
and would build a separate CalcStream for each channel.
The in-depth analysis conducted for the ongoing »Vertical Slice« effort
revealed that this differentiation is besides the point and would never
be materialised: Since -- by definition -- all media processing has
to be done by the engine, also the generation of the final output format
including any channel multiplexing will happen in render nodes.
The only exception would be when only a single channel of multichannel
media is extracted -- yet this case would then translate into a
dedicated ModelPort.
Based on this reasoning, a lot of complexity (and some contradictions)
within the JobTicket implementation can be removed -- together with
some further leftovers of the first attempt to build JobTickets always
from a Mock specification (we now use construction by the Segment,
based on an ExitNode, which is the expected actual implementation
for production setup)
...by defining a new scheme for access to custom allocators
...and then passing a reference to such an accessor into the
JobTicket ctor, thereby allowing the ticket itself recursively
to place further JobTicket instances into the allocation space
--> success, test passes (finally)
Up to now, a draft/mock implementation was used, relying on a »spec tuple«,
which was fabricated by MockJobTicket. But with the introduction of
NodeGraphAttachment, the MockSequence now generates a nested ExitNode structure,
and thus the JobTicket will be created through the "real" ctor, and
no longer via MockJobTicket.
Thus it is possible to skip this whole interspersed »spec tuple«,
since ExitNode *is* already this aggregated / abstracted Spec
PROBLEM: can not implement Spec-generation, since
- we must use a λ for internal allocation of JobTickets
- but recursive type inference is not possible
Will thus need to abandon the Spec-Tuple and relocate this
traversal-and-generation code into JobTicket itself
Use another unit-test (FixtureSegment_test) to guide and cover
the transition from the existing fake-implementation to the
actual implementation, where the JobTicket will be generated
on-demand, from a NodeGraphAttachment
It turns out that the real (not mocked) implementation of JobTicket creation
is already required now for this planned (mock)Dispatcher setup;
moreover, this real implementation turns out to be almost identical
to the mock implementation written recently -- just the nested structure
of prerequisite JobTickets needs to be changed into a similar structure
of ExitNodes
-- as an aside: rearrange various tests to be more in-line
with the envisioned architecture of playback and engine
...this opens up yet another difficult question and a host of new problems
- how are prerequisites detected or arranged by the Builder
- how are prerequisites represented?
- what is an ExitNode in terms of implementation? A subclass of ProcNode?
- how will the actual implementation of JobTicket creation (on-demand) work?
- how to adapt the Mock implementation, while retaining the Specification
for Segments and prerequisites?
..this is now the third attempt, and it seems this one leads to a
clean solution for the type rebinding problem, while also allowing
to unit-test each step in isolation.
The idea is to layer a *templated* builder class on top,
but to slice it away in each step, re-assemble the pipeline
and decorate a new builder instance on top. The net result
is a tightly interconnected processing pipeline without
any spurious interspersed leftovers from the builder,
while all intermediate steps are fully operational
and can thus be unit-tested...
...still very rough edged...
...based on the idea to have a pair(Dependent,Dependency) and to shift these on each level of expansion
PROBLEMS:
- what to use as root level?
- can not handle JobTicket const& in transform-iterator (assignment operator)
...it turns out that we actually do not need to wrap TreeExplorer
on the builder types, because basically there is only a single active
builder type, and the complete processing pipeline can be assembled
in a single terminal function.
The type rebinding problem can thus be solved just by a simple
marker struct, which inherits from a template parameter
...hard to tackle...
The idea is to wrap the TreeExplorer builder, so that our specific
builder functions can be delegated to the (inherited) generic builder functions
and would just need to supply some cleverly bound lambdas. However,
resulting types are recursive, which does not play nice with type inference,
and working around that problem leads to capturing a self reference,
which at time of invocation is already invalidated (due to moving the
whole pipeline into the final storage)
...which leads to the next daunting problems:
- we need some mocked ModelPort and DataSink placeholders
- we need a way how to inherit from a partial TreeExplorer pipeline
...introduced in preparation for building the Dispatcher pipeline,
which at its core means to iterate over a sequence of frame positions;
thus we need a way to stop rendering at a predetermined point...
several years ago, it seemed like a good idea to incorporate
the link between nominal time and wall-clock time into a dedicated
anchor point, which also regulates the continued frame planning.
But it turned out that such a design mixes up several concepts
and introduces confusion regarding the meaning of "real time"
- latency can not be reasonably defined for a whole planning chunk
- skipping or sliding due to missed deadlines can not reasonably handled
within such an abstract entity; it must be handled rather at the
level of a playback process
- linking the frame grid generation directly to a planning chunk
undercuts the possible abstraction of a planning pipeline
...which is to build a »Job planning pipeline« step by step
in a test setup, and then factor that out as RenderDrive,
to supersede the existing CalcPlanContinuation and get
rid of the Monads this way...
Challenges
- there is an inconsistency with channel usage
- need to establish a way how to transport the output-Sink into the JobFunctor
- need a way to propagate the current frame number to the next planning chunk
The prototypical setup of data structures and test support components
is largely complete by now — with the exception of the `MockDispatcher`,
which will be completed while moving to the next steps pertaining to the
setup of a frame dispatch pipeline.
* the existing `DummyJob` was augmented to allow verification of
association between Job and `JobTicket`
* the existing implementation of `JobTicket` was verified and augmented
to allow coverage of the whole usage cycle
* a `MockJobTicket` was implemented on top, which can be generated
from a symbolical test specification (rather than from the real
Fixture data structure)
* a complete `MockSegmentation` was developed, allowing to establish
all the aforementioned data structures without an actual backing
Render Engine. Moreover, `MockSegmentation` can be generated
from the aforementioned symbolic test specification.
* as part of this work, an algorithm to split an existing Segmentation
and to splice in new segments was developed and verified
Last testcase: add deeply nested Prerequisites.
Turns out that the allocator must be able to handle
re-entrant allocations, which std::deque can not fulfil.
Thus using std::list here for the Mock implementation.
In the end, the real allocations will be done by our custom
allocator (AllocationCluster), which can be arranged easily
to support re-entrant allocation calls (since the whole point
is to just place those objects into a pre-allocated large block
and only de-allocate them later in one sweep. Thus the allocator
does not need to wait for the object constructor to finish, which
trivially allows for re-entrant calls)
...which uncovers further deeply nested problems,
especially when referring to non-copyable types.
Thus need to construct a common type that can be used
both to refer to the source elements and the expanded elements,
and use this common type as result type and also attempt to
produce better diagnostic messages on type mismatch....
...the improved const correctness on STL iterators uncovered another
latent problem with our diagnostic format helpers, which provide
consistently rounded float and double output, but failed to take
CV-qualification into account
This is a subtle and far reaching fix, which hopefully removes
a roadblock regarding a Dispatcher pipeline: Our type rebinding
template used to pick up nested type definitions, especially
'value_type' and 'reference' from iterators and containers,
took an overly simplistic approach, which was then fixed
at various places driven by individual problems.
Now:
- value_type is conceptually the "thing" exposed by the iterator
- and pointers are treated as simple values, and no longer linked
to their pointee type; rather we handle the twist regarding
STL const_iterator directly (it defines a non-const value_type,
which is sensible from the STL point of view, but breaks our
generic iterator wrapping mechanism)
...in an attempt to resolve the deeply nested problems encountered
while building an iterator pipeline for the Dispatcher. It seems
that I was sloppy some years ago and just "bashed them into submission",
thereby mixing up two different meanings of "value_type"
Moreover I seemingly implemented the same helper trait template twice,
so the first step is to switch all usages to meta::TypeBinding
To complete the mock setup, the next step would be to extend the GenNode-based spec language
to allow defining prerequisite Mock-JobTickets. Setting this up seems rather straight forward --
however, defining a simple testcase to cover this extension runs into surprisingly tricky problems..
- for one, the singleValIterator from Itertools has serious difficulties handling references
- but even more surprising, it seems impossible to make the "prerequisites iterator"
fit into the Tree-Explorer framework (which I intend to use as replacement
for the monadic approach)
after some extended analysis of generic types and template instances,
it seems that not TreeExplorer as such is the primary problem, but rather
there is a conceptual mismatch somewhere deep down in Itertools or Iter-Adapter
By reasoning and analysis I conclude that the differentiation into
multiple channels is likely misplaced in JobTicket; it belongs rather
into the Segment and should provide a suitable JobTicket for each ModelPort
Handling of prerequisites also needs to be reshaped entirely after
switching to a pipeline builder for the Job-planning pipeline; as
preliminary access point, just add an iterator over the immediate
prerequisites, thereby shifting the exploration mechanism entirely
out of the JobTicket implementation
Testcase: A simple Segmentation with a single and bounded Segment
As an aside, figured out how to unpack an iterator such as to
tie a fixed number of references through a structured binding:
auto const& [s1,s2,s3] = seqTuple<3> (mockSegs.eachSeg());
...now able to build a mock segmentation which issues dummy jobs,
and is wired such as to verify the right job is invoked for each segment.
And this allows to build and verify the Dispatcher,
without being able to invoke actual render jobs yet.
- only the parts actually touched by the algo will be re-allocated
- when a segment is split, the clone copies carry on all data
Library: add function to check for a bare address (without type info)
This macro has turned out to be quite useful in cases
where a generic setup / algorithm / builder need to be customised
with λ adaptors for binding to local or custom types. It relies
on the metafunctions defined in lib/meta/function.hpp to match
the signature of "anything function-like"; so this seems the
proper place to provide that macro alongside
...this is something I should have done since YEARS, really...
Whenever working with symbolically represented data, tests
typically involve checking *hundreds* of expected results,
and thus it can be really hard to find out where the
failure actually happens; it is better for readability
to have the expected result string immediately in the
test code; now this expected result can be marked
with a user-defined literal, and then on mismatch
the expected and the real value will be printed.
The algorithm coded thus far turns out to be rather generic,
and thus it can be rewritten into a template, while all specific parts
are supplied as λ-Functors.
- instead of Time we use a generic ordering type
- the Iterator is likewise turned into a template parameter
- all the operations are directly supplied as functor types
- C++17 is able to pick up all those λ-types from the ctor call (sketched below)
This change looks like "low hanging fruit"; the legibility of the code
is not seriously hampered, yet we get the benefit to test this rather
technical piece of logic by an isolated test (which for now is the
primary motivation), and we can hope to re-use it for similar tasks.
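A sketch of this parametrisation (names hypothetical); C++17 class template argument deduction picks up the λ types from the ctor call:

    template<class ITR, class GET, class CUT>
    class SplitSplice
      {
        ITR  seg_;        // iterator over the existing segments
        GET  getStart_;   // λ: retrieve the ordering value (e.g. start time)
        CUT  cutSeg_;     // λ: truncate or split an existing segment
        
      public:
        SplitSplice (ITR seg, GET getStart, CUT cutSeg)
          : seg_{seg}, getStart_{getStart}, cutSeg_{cutSeg}
          { }
        // ...the actual splice algorithm operates solely through these functors
      };
    
    // all λ types deduced from the ctor call:
    // SplitSplice splice{segs.begin()
    //                   ,[](auto& seg){ return seg.start(); }
    //                   ,[](auto& seg, auto time){ /* truncate seg at time */ }};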
There are 12 distinct cases regarding the orientation of two intervals;
The Segmentation::splitSplice() operation shall insert a new Segment
and adjust / truncate / expand / split / delete existing segments
such as to retain the *Invariant* (seamless segmentation covering
the complete time axis)
right now we're lacking a complete working implementation of render node invocation,
and thus the Dispatcher implementation can only be verified with the help
of mocked jobs. However, at least a preliminary implementation of tagging the
invocation instance is available, and thus we're able to verify that
a given job instance indeed belongs to and is "backed" by a specific JobTicket.
This is prerequisite for building up a (likewise mocked) Fixture datastructure,
and this in turn was meant to form the basis for attacking an actual Scheduler
implementation, followed by a real render node invocation.
- can now create a Job from JobTicket::NIL
- on invocation this Job will do nothing
Only when the first real output backend is implemented,
we can decide if this simplistic implementation is enough,
or if an empty output must be explicitly generated...
* using a simplified preliminary implementation of hash chaining (see #1293)
* simplistic implementation of hashing for time values (half-rotation)
* for now just hashing the time into the upper part of the LUID
Maybe we can even live with that implementation for some time,
depending on how important uniform distribution of hash values is
for proper usage of the frame cache.
Needless to say, various further fine points need more consideration,
especially questions of portability (32bit anyone?). Moreover, since
frame times are typically quantised, the search space for the hashed
time values is drastically reduced; conceivably we should rather
research and implement a good hash function for 128bit and then combine
all information into a single hash key....
...using the MockJobTicket setup as point of reference,
since the actual invocation of render nodes will only be drafted
later in this "Vertical Slice" integration effort...
- introduce a JobTicket::NOP (null-object pattern)
- assuming that the function splitSplice() will retain complete coverage always
Remark:
`Fixture::getPlaylistForRender()` is a leftover from the very early implementation drafts.
This function was more or less based on the way Cinelerra works; it is clear by now
that Lumiera can not possibly work this way, given that we'll build a low-level model
and dispatch precompiled render jobs....
The Fixture and the low-level model backbone deserve a distinct namespace on their own.
Since it's built by the Builder from the Session contents, and also used by the frame dispatch,
we can expect dependence on some types from Steam-Layer, and thus this namespace
needs to reside in Steam-Layer rather, while the actual low-level Model
might become part of Vault-Layer, creating a hierarchy of data structures.
(Remark: likely also the session related namespaces will need a reorganisation)
The idea is to escape a "design deadlock" by using a test-driven prototype
implementation of the data structure to back a further development
of the Dispatcher and Scheduler implementation, which then can be used
to gradually elaborate and switch over to an actual implementation
data structure
...requires a first attempt towards defining a `JobTicket`.
This turns out quite tricky, due to using those `LinkedElements`
(intrusive single linked list), which requires all added records
actually to live elsewhere. Since we want to use a custom allocator
later (the `AllocationCluster`), this boils down to allocating those
records only when about to construct the `JobTicket` itself.
What makes matters even worse: at the moment we use a separate spec
per Media channel (maybe these specs can be collapsed later on).
And thus we need to pass a collection -- or better: an iterator
yielding raw specs, which in turn must reveal yet another nested
sequence for the prerequisite `JobTickets`.
Anyhow, now we're able at least to create an empty `JobTicket`,
backed by a dummy `JobFunctor`....
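The constraint imposed by such an intrusive list, in a schematic sketch
(generic stand-ins, not the actual `LinkedElements` interface):

```cpp
// an intrusive singly linked list only chains nodes together;
// the records themselves must live elsewhere, e.g. in an AllocationCluster
struct Record
{
    Record* next = nullptr;      // intrusive link, maintained by the list
    /* ...payload, e.g. a prerequisite spec... */
};

struct IntrusiveList             // schematic stand-in for lib::LinkedElements
{
    Record* head = nullptr;
    void push (Record& rec)      // prepend: the list itself never allocates
    {
        rec.next = head;
        head = &rec;
    }
};
```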
Looks like we'll actually retain and use this low-level solution
in cases where we just cannot afford heap allocations but need
to keep polymorphic objects close to one another in memory.
Since single linked lists are filled by prepending, it is rather
common to need the reversed order of elements for traversal,
which can be achieved in linear time.
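The familiar in-place reversal, for reference (a generic sketch; any node
type exposing a `next` pointer will do):

```cpp
// reverse a singly linked list in O(n) by re-linking each node
// onto the front of an initially empty reversed chain
template<class N>
N* reverseList (N* head)
{
    N* rev = nullptr;
    while (head)
    {
        N* tail = head->next;
        head->next = rev;        // hook current node onto the reversed chain
        rev  = head;
        head = tail;
    }
    return rev;
}
```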
And while we're here, we can modernise the templated emplacement functions
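In spirit, the modernised emplacement boils down to variadic perfect
forwarding plus placement-new; the allocator hook shown here is hypothetical:

```cpp
#include <new>
#include <utility>

// construct an element of type TY in place, within storage obtained
// from whatever allocator backs the list (raw-byte interface assumed)
template<class TY, class ALO, typename... ARGS>
TY& emplaceElement (ALO& allocator, ARGS&& ...args)
{
    void* loc = allocator.allocateBytes (sizeof(TY));   // hypothetical call
    return *new(loc) TY (std::forward<ARGS> (args)...);
}
```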
- build the reworked Job-planning pipeline more or less from scratch
- back that with mocked `Dispatcher` and `JobTicket`
- then transfer this into a `RenderDrive`, which can be tested as well
- could then continue to a `CalcStream` integration test....
- introduce a new entity: RenderDrive
- it supersedes the CalcPlanCalculation, but is managed by CalcStream
- moreover, the RenderDrive will house an IterTreeExplorer-Pipeline
- define the concerns and relationships more clearly (see Drawing)
- prerequisite to disentangle the Job-planning "mechanics"
- decision: the Monad-style iteration framework will be abandoned
- the job-planning will be recast in terms of the iter-tree-explorer
- job-planning and frame dispatch will be disentangled
- the Scheduler will deliberately offer a high-level interface
- on this high-level, Scheduler will support dependency management
- the low-level implementation of the Scheduler will be based on Activity verbs
This finishes a long lasting effort to rework the top-level of the Lumiera GTK UI,
to adapt to GTK-3 and the new asynchronous message based architecture.
Special credits and thanks to
* Joel Holdsworth
* Stefan Kangas
Without their relentless foundational work, the Lumiera UI could
never be where it is now. Even if some code was rewritten and several
parts of the old GTK-2 implementation are now obsolete, numerous ideas,
solutions and inspirations were drawn from those early contributions
and live on as part of the reworked GUI.
for the sake of developing the timeline body drawing code,
several tweaks were added to make the impact of the styling stand
out clearly. This changeset removes all those tweaks and restores
the code to the intended neutral behaviour.
Moreover, the custom drawing of the timeline now requires some
basic aids to be present in the stylesheet, otherwise the track structure
will not be visible. Thus add some minimalistic styling to the
"light-theme-complement"-stylesheet, mostly based on the usual
predefined theme colours and some box-shadow settings.
This is by no means an adequate graphical solution,
yet it should be enough to get on with coding....
check private notes and mindmap and fix some remaining minor inconsistencies,
notably the calculations in the overlay renderer in the `BodyCanvasWidget`.
Not sure if the initial window width is used properly for calibration
of ZoomWindow "pixel width" base setting.
Follow-up to the layout logic established with this commit:
09714cfe28
An extended round of rebuilding and reworking the global UI structures
can be concluded now. A flexible recursive structure of Tracks has
been implemented for the new Timeline-UI, allowing all relevant
aspects of structure composition to be controlled by **Diff Messages**
sent up from the Steam-Layer into the UI.
Moreover, the ability to control the custom drawing code through regular
**CSS style rules** has been demonstrated, allowing for seamless integration
of Lumiera UI elements with the existing desktop theme.
This completes the initial implementation round for the TrackHead.
- arrangement and layout for nested sub-Tracks is now settled
- a graphical representation of scope nesting was implemented
Postponed for later...
- still some minor discrepancies on synchronisation of vertical space
between TrackHead and custom drawing in the body (off-by-one?)
- Expanding / Collapsing of Tracks
- Implement actual Controls to influence the Scope, e.g. Volume, Mix-Mode
- Dynamically indicate selection and Muting on the structure display
While by default the Cairo Context is scaled to device units,
we must not assume that the given Context is unscaled; rather
it might have been deliberately scaled...
A notable example is the Gtk Inspector, which offers a "Magnifier" feature
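In code, the safe pattern is to compose with the transformation already
installed on the context, rather than resetting it; a small sketch, with
the function and parameters made up for illustration:

```cpp
#include <cairomm/context.h>

// draw relative to whatever transformation is already installed;
// never reset the context to an identity matrix
void drawTrackMarker (Cairo::RefPtr<Cairo::Context> const& cr,
                      double offX, double offY, double w, double h)
{
    cr->save();
    cr->translate (offX, offY);   // compose with the pre-existing scaling
    cr->move_to (0, 0);
    cr->line_to (w, h);           // user-space coordinates throughout
    cr->stroke();
    cr->restore();
}
```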
- pick up all relevant values from CSS
- also control the width of the StaveBracket
- observe the given overall height
Moreover, complete documentation drawing in Inkscape
and add a page to the TiddlyWiki, describing the principles
underlying this design and construction.
.timeline__head : The complete header container on the left side
.timeline__navi : navigation control at top
.timeline__pbay : container holding the patchbay on the left side
.fork__head : each individual TrackHeadWidget (possibly nested)
.fork__control : container for the control components for each track scope
.fork__bracket : the StaveBracket drawing to indicate the nesting structure
The tricky part is to derive the anchor point for the upper
and the lower cap of the bracket, taking into account possible padding.
There seems to be a bug hidden somewhere in this logic, since the
line turns out too short at the lower end....?
According to the documentation, we should be able to get a Pango
font description from the CSS style context, and from there we should
be able to retrieve a font-size specification (and a DPI for the display)
Running this experimental code yields a font-size value of 9pt,
leading to a scale factor of about 6, which seems plausible.
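Roughly, the experiment amounts to the following (gtkmm-3 API; the widget
reference is assumed to be at hand, and the relative font-size case is
glossed over):

```cpp
// retrieve font-size and display DPI through the style context
Glib::RefPtr<Gtk::StyleContext> style = widget.get_style_context();
Pango::FontDescription font = style->get_font (widget.get_state_flags());
double sizePt = font.get_size() / double(PANGO_SCALE);        // e.g. 9 pt
                                                              // (assuming not size_is_absolute())
double dpi    = Gdk::Screen::get_default()->get_resolution(); // typically 96
double sizePx = sizePt * dpi / 72.0;                          // size in pixels
```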
after weighing the pros and cons,
I decided to follow the standard path and pick up values
for each StaveBracket instance individually from Gtk::StyleContext,
assuming that the GTK framework will take care of caching and performance.
...as it turned out, it suffices just to state the desired behaviour explicitly
- make the StaveBracket expand=false
- declare the HeadControl expand=true
The reason lies in the fact that on leaf-tracks there are just two
adjacent cells in a single row, lacking any exterior source of layout guidance.
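Stated in gtkmm terms (member names invented for the sketch):

```cpp
staveBracket_.set_hexpand (false);  // the bracket keeps its natural width
headControl_.set_hexpand (true);    // the controls absorb any extra space
```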
...just drawing a marker cross for now to indicate allocated size;
speaking of size -- GTK sometimes expands allocation horizontally,
while we'd prefer an absolutely fixed size for the purpose at hand.
The intention is to shift the focus of development work down towards Player and Scheduler soon.
However, since the timeline display saw substantial improvement it seems prudent
to finish work on some open ends, notably
- the track head structure
- the drawing and styling code
While content rendering for Clip widgets can safely be postponed, for the TrackHeadWidget,
where custom drawing is planned to display a structural outline of the nested scopes, some
ground work should be completed to make those plans explicit.
This both demonstrates the layout of the `TrackHeadWidget`
and puts `ElementBoxWidget` into intended use to indicate
the scope of a track and to provide a placement icon and
an expander/collapser button.
see #1018
see #1219
Layout calculation and balancing between body and track header
now works reasonably well, labels are placed properly and the
calculated layout remains stable when changing window size,
connected panes and scrollbars behave as expected now.
Quite a feat!
...to properly account for
- the "prelude" padding above the root track
- overview rulers located in a fixed separate canvas at the top
- slope-down and slope-up borders around nested tracks.
After testing, results now seem to be accurate to within ±1px.
Finally (sigh)
As it turns out, we need to take the actual allocation of the cells
within the grid explicitly into account: combined cells will report
their extension for each of the underlying cells, thus leading to
excessively overestimated measurements.
So we now calculate the overall height based on the actual structure
- first row holds the label
- left column below used as expander
- right column holds individual content
Remaining problems:
- height of ruler canvas at top not taken into account (11px in this example)
- start of sub-track headers not synchronised with start of sub-tracks
in the body area
...seemingly the allocation of grid cells in the `TrackHeadWidget`
is not quite correct yet: even when there are nested sub-tracks,
we always need another row to hold the controls corresponding to
the track itself and the whole scope. And this row is also what
should be adjusted to match the vertical extension of the content
area.
As it turns out, the whole topic how to handle collapsed tracks
was not even considered yet; the calculation of the "track profile"
would need to be reworked to accommodate collapsed tracks, see: #1265
Sometimes, fractional seconds in the ZoomWindow metric can build up
to numerator and denominator values in the 64-bit integer domain.
Thus the division and truncation of the window pixel width value
must be done in int64_t, while the result value is then
guaranteed to be a small integer < 100000.
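The gist of the fix, schematically (the function name is invented):

```cpp
#include <cstdint>

// the zoom metric is a fraction whose numerator and denominator may grow
// far into the int64_t range; divide and truncate in full 64-bit width
int pixelWidth (int64_t num, int64_t den)
{
    int64_t px = num / den;     // division and truncation in int64_t
    return int(px);             // narrowing is safe: result < 100000
}
```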
This caused the canvas to flicker and jump in size and the
resulting scrollbar change caused various cascading effects
on the further layout calculations...
Note: the actual root cause, why this re-entrance happens,
is due to another obvious numerics bug not yet solved.
Here, the canvas width was suddenly set to zero, causing
the scrollbar position to change and thus the ZoomWindow
to re-fire the structure change signal.
However, such invalidation of previously established baseline
values can never be totally excluded in advanced layout calculations,
and thus the evaluation mechanism must be prepared to be re-triggered
and to start over, until a stable layout is achieved.
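One way to picture this requirement is a fixed-point loop over the
participating layout elements; a sketch with invented names (the
`LayoutElement` collection is described further below):

```cpp
#include <vector>

struct LayoutElement           // sketch: one participant in the evaluation
{
    virtual ~LayoutElement() = default;
    virtual bool establishLayout() = 0;   // true when a baseline was invalidated
};

// repeat the evaluation pass until the layout settles at a fixed point
void evaluateDisplay (std::vector<LayoutElement*> const& elements)
{
    bool invalidated;
    do {
        invalidated = false;
        for (LayoutElement* elm : elements)
            invalidated |= elm->establishLayout();
    }
    while (invalidated);
}
```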
- rearrange cell content and disable auto-expand to prevent
the content area from becoming oversized initially
- fix autocompletion error in signal binding,
causing segfault when moving the scrollbar
attempting to get the vertical space allocation in header and content area
synchronised; previously we conflated the content size and additional
padding, but even after distinguishing both, we still get a cyclic
dependency, leading to progressively increasing allocated size...
After quite some tinkering, instead of extending the DisplayManager interface,
I now prefer to treat this connection rather as an intricate implementation detail:
The TimelineLayout implementation now provides two translation functions,
which are directly wired as slots to the Signals emitted by moving the
hand of the scrollbar; the idea is that these functions mutate the ZoomWindow,
which then triggers a DisplayEvaluation, which in turn causes the
drawing code to pick up and translate back the new metric and position.
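Schematically, the wiring looks like this (gtkmm-3 / libsigc++; the member
and slot names are invented):

```cpp
// connect the scrollbar's adjustment directly to a ZoomWindow mutator
hScrollbar_.get_adjustment()->signal_value_changed().connect (
        sigc::mem_fun (*this, &TimelineLayout::syncZoomFromScrollPos));
```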
Results look promising, insofar as the DisplayEvaluation is now triggered
repeatedly, and the actual window width in pixels is propagated;
however, the response of the layout code seems random at times,
the allocated height grows monotonically and the code segfaults when
moving the scrollbar...
this makes the arrangement more symmetric and natural
and also makes the overview ruler scroll alongside the content pane,
thereby creating the (intended) impression of one uniform layout space
As it turns out, this happens as a side-effect of the workaround from 2019-08-22
fc5eaf857c
Obviously, just set_size() on the canvas is not sufficient for
GTK actually to establish a size-request (seemingly the canvas counts
as /empty/ and only real widgets would make a difference).
However, since the ruler canvas is placed directly into the box,
and not adapted by a ScrolledWindow, an explicit set_size_request()
also causes the enclosing Box to "inherit" this minimal required size,
thereby also spreading out the BodyCanvasWidget beyond the size
actually available. GTK handles this situation by hard-clipping
on the right side, which causes the vertical scrollbar to disappear
and keeps the horizontal scrollbar disabled (since nominally it still
spans the whole size available, even while this size is then clipped
subsequently).
This changeset adds a lot of debug printing and demonstrates this
behaviour by setting only a minimal horizontal size_request, so that
the window is no longer expanded and clipped.
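The two calls in question, side by side (gtkmm-3; the canvas is assumed to
be a Gtk::Layout, the extent variables are placeholders):

```cpp
canvas.set_size (fullWidth, fullHeight);   // extends the scrollable extent only;
                                           // an "empty" canvas requests no size
canvas.set_size_request (1, fullHeight);   // establishes a real size-request,
                                           // inherited by the enclosing boxes
```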
This does not really change the logic of the DisplayEvaluation mechanism,
but makes it much more flexible: instead of having two hard wired components,
the DisplayEvaluation visitor now holds a collection of LayoutElement pointers.
This way, the Layout manager itself (on behalf of the ZoomWindow) can
participate in the process, and on activation will now establish the
window width in pixel.
This works now, insofar as the drawing on the canvas is adapted coherently;
however, something with the setup of the Scrollbar is still not quite right;
some time ago I recall the scrollbar worked, but now it is blocked
and the canvas just clips to the right side.
It is now tied to the start of ZoomWindow::overallSpan(),
thereby ensuring that the (technical) pixel coordinates within the window
and for drawing on the canvas are always positive. Whenever the ZoomWindow
re-calibrates, its change signal will trigger, causing the
TimelineLayout to perform a new DisplayEvaluationPass,
which in turn prompts all embedded widgets to readjust
their positions accordingly.