By reasoning and analysis I conclude that the differentiation into
multiple channels is likely misplaced in JobTicket; it belongs rather
into the Segment, which should provide a suitable JobTicket for each ModelPort
Handling of prerequisites also needs to be reshaped entirely after
switching to a pipeline builder for the Job-planning pipeline; as a
preliminary access point, just add an iterator over the immediate
prerequisites, thereby shifting the exploration mechanism entirely
out of the JobTicket implementation
Testcase: a simple Segmentation with a single, bounded Segment
As an aside, figured out how to unpack an iterator so as to
tie a fixed number of references through a structured binding:
auto const& [s1,s2,s3] = seqTuple<3> (mockSegs.eachSeg());
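A minimal sketch of how such a helper might be implemented (hypothetical; the real seqTuple may differ): it pulls the next N elements from a generic iterator and packs references into a tuple:

    #include <cstddef>
    #include <tuple>
    
    // hypothetical sketch: materialise the next N elements yielded by an
    // iterator into a tuple of references, suitable for structured bindings
    template<std::size_t N, class IT>
    auto
    seqTuple (IT&& iter)
    {
        if constexpr (N == 0)
            return std::tuple<>{};
        else
          {
            auto& elm = *iter;          // bind a reference to the current element
            ++iter;                     // ...then advance the iterator
            return std::tuple_cat (std::tie(elm), seqTuple<N-1> (iter));
          }
    }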
...now able to build a mock segmentation which issues dummy jobs,
and is wired so as to verify that the right job is invoked for each segment.
This allows us to build and verify the Dispatcher,
even without being able to invoke actual render jobs yet.
- only the parts actually touched by the algorithm will be re-allocated
- when a segment is split, the clone copies carry over all data
Library: add function to check for a bare address (without type info)
This macro has turned out to be quite useful in cases
where a generic setup / algorithm / builder needs to be customised
with λ adaptors for binding to local or custom types. It relies
on the metafunctions defined in lib/meta/function.hpp to match
the signature of "anything function-like"; so this seems the
proper place to provide that macro alongside them.
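A hypothetical sketch of such a macro (names assumed; the actual implementation relies on the metafunctions from lib/meta/function.hpp rather than std::function):

    #include <functional>
    #include <type_traits>
    
    // assumed helper: matches λs, function pointers and functors alike
    template<typename FUN, typename SIG>
    constexpr bool has_signature = std::is_constructible<std::function<SIG>, FUN>::value;
    
    #define ASSERT_VALID_SIGNATURE(FUN, SIG) \
            static_assert (has_signature<FUN, SIG>, "unsuitable function signature");
    
    // usage: customise a generic builder with a λ adaptor for a local type
    auto adaptor = [](int x){ return x + 0.5; };
    ASSERT_VALID_SIGNATURE (decltype(adaptor), double(int))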
...this is something I should have done YEARS ago, really...
Whenever working with symbolically represented data, tests
typically involve checking *hundreds* of expected results,
and thus it can be really hard to find out where the
failure actually happens; it is better for readability
to have the expected result string immediately in the
test code; now this expected result can be marked
with a user-defined literal, and then on mismatch
both the expected and the actual value will be printed.
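A condensed sketch of that mechanism (simplified; the real helper reports through the test framework):

    #include <cstddef>
    #include <iostream>
    #include <string>
    
    struct ExpectString
      {
        std::string expected;
        
        bool
        operator== (std::string const& actual)  const
          {
            if (actual == expected) return true;
            std::cerr << "FAIL: expect «"<< expected <<"»\n"
                         "      but got «"<< actual   <<"»" << std::endl;
            return false;
          }
      };
    
    inline ExpectString
    operator""_expect (const char* lit, std::size_t len)
    {
        return ExpectString{std::string{lit,len}};
    }
    
    // in a test:  CHECK ("Segment(10|20)"_expect == renderResult);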
The algorithm coded thus far turns out to be rather generic,
and thus it can be rewritten into a template, while all specific parts
are supplied as λ-Functors.
- instead of Time we use a generic ordering type
- the Iterator is likewise turned into a template parameter
- all the operations are directly supplied as functor types
- C++17 is able to pick up all those λ-Types from the ctor call
This change looks like "low hanging fruit": the legibility of the code
is not seriously hampered, yet we get the benefit of testing this rather
technical piece of logic in an isolated test (which for now is the
primary motivation), and we can hope to re-use it for similar tasks.
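To illustrate the resulting shape (all names and details assumed, not the actual code):

    #include <vector>
    
    // generic ordering + iterator, with all operations as deduced λ-functors
    template<class IT, class GETS, class GETE>
    class SplitSpliceAlgo
      {
        IT begin_, end_;
        GETS getStart_;   // λ: element → ordering value
        GETE getEnd_;
      public:
        SplitSpliceAlgo (IT b, IT e, GETS gs, GETE ge)
          : begin_{b}, end_{e}, getStart_{gs}, getEnd_{ge}
          { }
        
        template<typename ORD>
        IT                // locate the element covering the given point
        find (ORD point)
          {
            for (IT it = begin_; it != end_; ++it)
              if (getStart_(*it) <= point and point < getEnd_(*it))
                return it;
            return end_;
          }
      };
    
    struct Seg { int start, end; };
    std::vector<Seg> segs {{0,5},{5,10},{10,20}};
    
    // C++17 deduces all λ-types directly from the ctor call
    SplitSpliceAlgo splice{segs.begin(), segs.end()
                          ,[](Seg const& s){ return s.start; }
                          ,[](Seg const& s){ return s.end;   }};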
There are 12 distinct cases regarding the orientation of two intervals.
The Segmentation::splitSplice() operation shall insert a new Segment
and adjust / truncate / expand / split / delete existing segments
so as to retain the *Invariant* (a seamless segmentation covering
the complete time axis)
right now we're lacking a complete working implementation of render node invocation,
and thus the Dispatcher implementation can only be verified with the help
of mocked jobs. However, at least a preliminary implementation of tagging the
invocation instance is available, and thus we're able to verify that
a given job instance indeed belongs to and is "backed" by a specific JobTicket.
This is a prerequisite for building up a (likewise mocked) Fixture datastructure,
and this in turn was meant to form the basis for attacking an actual Scheduler
implementation, followed by a real render node invocation.
- can now create a Job from JobTicket::NIL
- on invocation this Job will do nothing
Only when the first real output backend is implemented
can we decide whether this simplistic implementation is enough,
or if an empty output must be explicitly generated...
* using a simplified preliminary implementation of hash chaining (see #1293)
* simplistic implementation of hashing for time values (half-rotation)
* for now just hashing the time into the upper part of the LUID
Maybe we can even live with that implementation for some time,
depending on how important uniform distribution of hash values is
for proper usage of the frame cache.
Needless to say, various further fine points need more consideration,
especially questions of portability (32bit anyone?). Moreover, since
frame times are typically quantised, the search space for the hashed
time values is drastically reduced; conceivably we should rather
research and implement a good hash function for 128bit and then combine
all information into a single hash key....
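For illustration, the gist of such a half-rotation hash could look like this (a sketch, not the actual code from #1293):

    #include <cstdint>
    #include <cstring>
    
    using LUID = unsigned char[16];
    
    inline uint64_t
    rotTime (int64_t time)     // "half-rotation": swap upper and lower 32 bits,
    {                          // so the fast-changing low bits affect high hash bits
        uint64_t t = uint64_t(time);
        return (t << 32) | (t >> 32);
    }
    
    inline void
    hashIntoUpperPart (LUID& luid, int64_t frameTime)
    {
        uint64_t hash = rotTime (frameTime);
        std::memcpy (luid + 8, &hash, sizeof hash);   // upper part of the LUID
    }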
...using the MockJobTicket setup as point of reference,
since the actual invocation of render nodes will only be drafted
later in this "Vertical Slice" integration effort...
- introduce a JobTicket::NOP (null-object pattern)
- assuming that the function splitSplice() will always retain complete coverage
Remark:
`Fixture::getPlaylistForRender()` is a leftover from the very early implementation drafts.
This function was more or less based on the way Cinelerra works; it is clear by now
that Lumiera can not possibly work this way, given that we'll build a low-level model
and dispatch precompiled render jobs....
The Fixture and the low-level model backbone deserve a distinct namespace on their own.
Since it's built by the Builder from the Session contents, and also used by the frame dispatch,
we can expect dependence on some types from Steam-Layer; thus this namespace
rather needs to reside in Steam-Layer, while the actual low-level Model
might become part of Vault-Layer, creating a hierarchy of data structures.
(Remark: likely also the session related namespaces will need a reorganisation)
The idea is to escape a "design deadlock" by using a test-driven prototype
implementation of the data structure to back a further development
of the Dispatcher and Scheduler implementation, which then can be used
to gradually elaborate and eventually switch over to an actual
implementation data structure.
...requires a first attempt towards defining a `JobTicket`.
This turns out quite tricky, due to using those `LinkedElements`
(intrusive single linked list), which requires all added records
actually to live elsewhere. Since we want to use a custom allocator
later (the `AllocationCluster`), this boils down to allocating those
records only when about to construct the `JobTicket` itself.
What makes matters even worse: at the moment we use a separate spec
per Media channel (maybe these specs can be collapsed later on).
And thus we need to pass a collection -- or better, an iterator
over raw specs, which in turn must reveal yet another nested
sequence for the prerequisite `JobTickets`.
Anyhow, now we're able at least to create an empty `JobTicket`,
backed by a dummy `JobFunctor`....
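The constraint in a nutshell (simplified mock, not the real LinkedElements):

    // an intrusive single-linked list only chains records living elsewhere;
    // hence the records must be allocated -- later within the AllocationCluster --
    // right when constructing the ticket itself
    struct ChannelSpec
      {
        ChannelSpec* next = nullptr;
        // ...per-channel data...
      };
    
    struct TicketSketch
      {
        ChannelSpec* specs = nullptr;    // head of the intrusive list
        
        void
        attach (ChannelSpec& record)     // record storage is managed externally
          {
            record.next = specs;
            specs = &record;             // note: filled by prepending
          }
      };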
Looks like we'll actually retain and use this low-level solution
in cases where we just can not afford heap allocations but need
to keep polymorphic objects close to one another in memory.
Since single linked lists are filled by prepending, it is rather
common to need the reversed order of elements for traversal,
which can be achieved in linear time.
And while we're here, we can modernise the templated emplacement functions
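The reversal itself is the textbook linear-time pointer flip, roughly:

    struct Node { Node* next; };
    
    inline Node*
    reverse (Node* head)
    {
        Node* rev = nullptr;
        while (head)
          {
            Node* succ = head->next;
            head->next = rev;        // re-link current node onto the reversed chain
            rev  = head;
            head = succ;
          }
        return rev;                  // new head == former tail
    }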
- build the reworked Job-planning pipeline more or less from scratch
- back that with mocked `Dispatcher` and `JobTicket`
- then transfer this into a `RenderDrive`, which can be tested as well
- could continue then to a `CalcStream` integration test....
- introduce a new entity: RenderDrive
- it supersedes the CalcPlanCalculation, but is managed by CalcStream
- moreover, the RenderDrive will house an IterTreeExplorer-Pipeline
- define the concerns and relationships more clearly (see Drawing)
- prerequisite to disentangle the Job-planning "mechanics"
- decision: the Monad-style iteration framework will be abandoned
- the job-planning will be recast in terms of the iter-tree-explorer
- job-planning and frame dispatch will be disentangled
- the Scheduler will deliberately offer a high-level interface
- on this high-level, Scheduler will support dependency management
- the low-level implementation of the Scheduler will be based on Activity verbs
This finishes a long-lasting effort to rework the top level of the Lumiera GTK UI,
to adapt to GTK-3 and the new asynchronous message based architecture.
Special credits and thanks to
* Joel Holdsworth
* Stefan Kangas
Without their relentless foundational work, the Lumiera UI could
never be where it is now. Even if some code was rewritten and several
parts of the old GTK-2 implementation are now obsolete, numerous ideas,
solutions and inspirations were drawn from those early contributions
and live on as part of the reworked GUI.
For the sake of developing the timeline body drawing code,
several tweaks were added to make the impact of the styling stand
out clearly. This changeset removes all those tweaks and restores
the code to the intended neutral behaviour.
Moreover, the custom drawing of the timeline now requires some
basic aids to be present in the stylesheet, otherwise the track structure
will not be visible. Thus add some minimalistic styling to the
"light-theme-complement"-stylesheet, mostly based on the usual
predefined theme colours and some box-shadow settings.
This is by no means an adequate graphical solution,
yet it should be enough to get on with coding....
check private notes and mindmap and fix some remaining minor inconsistencies,
notably the calculations in the overlay renderer in the `BodyCanvasWidget`.
Not sure if the initial window width is used properly for calibration
of ZoomWindow "pixel width" base setting.
Follow-up to the layout logic established with this commit:
09714cfe28
An extended round of rebuilding and reworking the global UI structures
can be concluded now. A flexible recursive structure of Tracks has
been implemented for the new Timeline-UI, allowing control of all
relevant aspects of structure composition by **Diff Messages** sent
up from the Steam-Layer into the UI.
Moreover, the ability to control the custom drawing code through regular
**CSS style rules** has been demonstrated, allowing for seamless integration
of Lumiera UI elements with the existing desktop theme.
This completes the initial implementation round for the TrackHead.
- arrangement and layout for nested sub-Tracks is now settled
- a graphical representation of scope nesting was implemented
Postponed for later...
- still some minor discrepancies on synchronisation of vertical space
between TrackHead and custom drawing in the body (off-by-one?)
- Expanding / Collapsing of Tracks
- Implement actual Controls to influence the Scope, e.g. Volume, Mix-Mode
- Dynamically indicate selection and Muting on the structure display
While by default the Cairo Context is scaled to device units,
we must not assume that the given Context is unscaled; rather
it might have been deliberately scaled...
A notable example is the Gtk Inspector, which offers a "Magnifier" feature
- pick up all relevant values from CSS
- also control the width of the StaveBracket
- observe the given overall height
Moreover, complete documentation drawing in Inkscape
and add a page to the TiddlyWiki, describing the principles
underlying this design and construction.
.timeline__head : The complete header container on the left side
.timeline__navi : navigation control at top
.timeline__pbay : container holding the patchbay on the left side
.fork__head : each individual TrackHeadWidget (possibly nested)
.fork__control : container for the control components for each track scope
.fork__bracket : the StaveBracket drawing to indicate the nesting structure
The tricky part is to derive the anchor point for the upper
and the lower cap of the bracket, taking into account possible padding.
There seems to be a bug hidden somewhere in this logic, since the
line turns out too short at the lower end....?
According to the documentation, we should be able to get a Pango
font description from the CSS style context, and from there we should
be able to retrieve a font-size specification (and a DPI for the display)
Running this experimental code yields a font-size value of 9pt,
leading to a scale factor of about 6, which seems plausible.
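The experimental code follows roughly this pattern (gtkmm-3 API as documented, not verified across versions):

    #include <gtkmm.h>
    
    double
    getFontSizePt (Gtk::Widget& widget)
    {
        auto style = widget.get_style_context();
        Pango::FontDescription font = style->get_font (style->get_state());
        return font.get_size() / double(PANGO_SCALE);   // font size in points
    }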
After weighing the pros and cons,
I decided to follow the standard path and pick up the values
for each StaveBracket instance individually from Gtk::StyleContext,
assuming that the GTK framework will care for caching and performance
...as it turned out, it suffices just to state the desired behaviour explicitly
- make the StaveBracket expand=false
- declare the HeadControl expand=true
The reason lies in the fact that on leaf-tracks there are just two
adjacent cells in a single row, lacking any exterior source of layout guidance.
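In gtkmm terms, the fix boils down to (sketch):

    staveBracket.set_hexpand (false);   // the drawing keeps its natural width
    headControl .set_hexpand (true);    // the controls absorb any extra space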
...just drawing a marker cross for now to indicate allocated size;
speaking of size -- GTK sometimes expands allocation horizontally,
while we'd prefer an absolutely fixed size for the purpose at hand.
Intention is to shift focus of development work down towards Player and Scheduler soon.
However, since the timeline display saw substantial improvement it seems prudent
to finish work on some open ends, notably
- the track head structure
- the drawing and styling code
While content rendering for Clip widgets can safely be postponed, regarding the TrackHeadWidget,
where custom drawing is planned to display a structural outline of the nested scopes, some
ground work should be completed to make those plans explicit.
This both demonstrates the layout of the `TrackHeadWidget`
and puts `ElementBoxWidget` into intended use to indicate
the scope of a track and to provide a placement icon and
an expander/collapser button.
see #1018
see #1219
Layout calculation and balancing between body and track header
now works reasonably well, labels are placed properly and the
calculated layout remains stable when changing window size,
connected panes and scrollbars behave as expected now.
Quite a feat!
...to properly account for
- the "prelude" padding above the root track
- overview rulers located in a fixed separate canvas at the top
- slope-down and slope-up borders around nested tracks.
After testing, results now seem accurate to within ±1px
Finally (sigh)
As it turns out, we need to take the actual allocation of the cells
within the grid explicitly into account: combined cells will report
their extension for each of the underlying cells, thus leading to
excessively overestimated measurements.
So we now calculate the overall height based on the actual structure
- first row holds the label
- left column below used as expander
- right column holds individual content
Remaining problems:
- height of ruler canvas at top not taken into account (11px in this example)
- start of sub-track headers not synchronised with start of sub-tracks
in the body area
...seemingly the allocation of grid cells in the `TrackHeadWidget`
is not quite correct yet: even when there are nested sub-tracks,
we always need another row to hold the controls corresponding to
the track itself and the whole scope. And this row is also what
should be adjusted to match the vertical extension of the content
area.
As it turns out, the whole topic of how to handle collapsed tracks
had not even been considered yet; the calculation of the "track profile"
would need to be reworked to accommodate collapsed tracks, see: #1265
Sometimes, fractional seconds in the ZoomWindow metric can build up
to numerator and denominator values in the 64-bit integer domain.
Thus the division and truncation of the window pixel width value
must be done in int64_t, while the result value is then
guaranteed to be a small integer < 100000
This caused the canvas to flicker and jump in size and the
resulting scrollbar change caused various cascading effects
on the further layout calculations...
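Schematically, the fix looks like this (names assumed):

    #include <cstdint>
    #include <boost/rational.hpp>
    
    using Rat = boost::rational<int64_t>;
    
    inline int
    pixelWidth (Rat fractionalWidth)   // numerator/denominator may be huge
    {
        int64_t px = fractionalWidth.numerator() / fractionalWidth.denominator();
        return int(px);                // result guaranteed a small integer < 100000
    }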
Note: the actual root cause, why this re-entrance happens,
is due to another obvious numerics bug not yet solved.
Here, the canvas width was suddenly set to zero, causing
the scrollbar position to change and thus the ZoomWindow
to re-fire the structure change signal.
However, such invalidation of previously established baseline
values can never be totally excluded in advanced layout calculations,
and thus the evaluation mechanism must be prepared to be re-triggered
and start over, until a stable layout is achieved.
- rearrange cell content and disable auto-expand to prevent
the content area from becoming oversized initially
- fix autocompletion error in signal binding,
causing segfault when moving the scrollbar
attempting to get the vertical space allocation in header and content area
synchronised; previously we conflated the content size and additional
padding, but even after distinguishing both, we still get a cyclic
dependency, leading to progressively increasing allocated size...
After quite some tinkering, instead of extending the DisplayManager interface,
I now prefer to treat this connection rather as an intricate implementation detail:
the TimelineLayout implementation now provides two translation functions,
which are directly wired as slots to the signals emitted by moving the
hand of the scrollbar; the idea is that these functions mutate the ZoomWindow,
which then triggers a DisplayEvaluation, which in turn causes the
drawing code to pick up and translate back the new metric and position.
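The wiring amounts to roughly this (sketch, names assumed):

    // scrollbar movements mutate the ZoomWindow; the resulting change signal
    // triggers a DisplayEvaluation, which feeds back into the drawing code
    scrollbar.signal_value_changed().connect(
        [&]{ zoomWindow.setVisiblePos (scrollbar.get_value()); });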
Results look promising, insofar the DisplayEvaluation is now triggered
repeatedly, and the actual window width in pixel is propagated;
however, the response of the layout code is seemingly random at times,
the allocated height grows monotonically and the code segfaults when
moving the scrollbar...
this makes the arrangement more symmetric and natural
and also makes the overview ruler scroll alongside the content pane,
thereby creating the (intended) impression of one uniform layout space
As it turns out, this happens as a side-effect of the workaround from 2019-08-22
fc5eaf857c
Obviously, just set_size() on the canvas is not sufficient for
GTK actually to establish a size-request (seemingly the canvas counts
as /empty/ and only real widgets would make a difference).
However, since the ruler canvas is placed directly into the box,
and not adapted by a ScrolledWindow, an explicit set_size_request()
also causes the enclosing Box to "inherit" this minimal required size,
thereby also spreading out the BodyCanvasWidget beyond the size
actually available. GTK handles this situation by hard-clipping
on the right side, which causes the vertical scrollbar to disappear
and keeps the horizontal scrollbar disabled (since nominally it still
spans the whole size available, even while this size is then clipped
subsequently).
This changeset adds a lot of debug printing and demonstrates this
behaviour by setting only a minimal horizontal size_request, so that
the window is no longer expanded and clipped.
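In gtkmm terms, the difference demonstrated here (sketch):

    rulerCanvas.set_size (width, height);       // Gtk::Layout canvas extension only;
                                                // no size-request is generated
    rulerCanvas.set_size_request (width, -1);   // horizontal minimum only, which the
                                                // enclosing Box will "inherit"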
This does not really change the logic of the DisplayEvaluation mechanism,
but makes it much more flexible: instead of having two hard wired components,
the DisplayEvaluation visitor now holds a collection of LayoutElement pointers.
This way, the Layout manager itself (on behalf of the ZoomWindow) can
participate in the process, and on activation will now establish the
window width in pixel.
This works now insofar the drawing on the canvas is adapted coherently;
however, something with the setup of the Scrollbar is still not quite right;
I recall the scrollbar working some time ago, but now it is blocked
and the canvas just clips to the right side.
It is now tied to the start of ZoomWindow::overallSpan(),
thereby ensuring that the (technical) pixel coordinates within the window
and for drawing on the canvas are always positive. Whenever ZoomWindow
re-calibrates, its change signal will trigger, causing the
TimelineLayout to perform a new DisplayEvaluationPass,
which in turn prompts all embedded widgets to readjust
their positions accordingly.
Note: changed the behaviour of TimeSpan to possibly flip start and end,
and also to apply an Offset as given and then re-orient,
since this seems the least surprising behaviour.
These changes carry over into changed defaults and limiting
in the ZoomWindow constructor and various mutators, most
notably always shifting the time span into the allowed domain.
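The orientation rule, in simplified form (illustration only, not the real TimeSpan):

    #include <algorithm>
    #include <cstdint>
    
    struct SpanSketch
      {
        int64_t start, end;
        
        SpanSketch (int64_t s, int64_t offset)   // a negative offset flips
          : start{std::min (s, s+offset)}        // start and end, keeping the
          , end  {std::max (s, s+offset)}        // span always oriented
          { }
      };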
...the implementation was way too naive; in some cases we could go
into an infinite loop. In the end, using Newton approximation was not
necessary (and thus there is no loop anymore), but it helped me arrive
at a much better solution with a very small error margin in the average case.
All these corner cases are obviously "academic" to some degree,
but it turns out there is no clear-cut point where you'd be able
to just set a limit and be sure that fractional integer arithmetic
works flawlessly in all cases.
Thus the choice is
- give up (fractional) integers and work with floats and have to
deal with error accumulation
- or do something as chosen here, namely add a boundary zone, where
fractional integer arithmetic can be kept under control, while admitting
small errors, and in turn get absolutely precise integers in all
everyday standard cases
The value used previously was too conservative, and prevented ZoomWindow
from zooming out to the complete time domain. This was due to missing the
Time::SCALE denominator, which increased the limit by a factor of 1e6
In fact the code is able to handle even this extremely reduced limit,
but doing so seems over the top, since now detox() kicks in on several
calculations, leading to rather coarse-grained errors.
Thus I decided to use a compromise: lower the limit only by a factor of 1000;
with typical screen pixel widths, we can reach the full time domain,
while most scaling and zoom calculations can be performed precisely,
without detox() kicking in. Obviously this change requires adjusting
a lot of the test case expectations, since we can now zoom out maximally.
As it turns out, the calculation path initially chosen for mutateScale(Rat)
was needlessly indirect, and also duplicated several of the safeguards
meanwhile implemented much better in conformWindowToMetric(Rat)
Thus, instead of relatively re-scaling the window, now we just
limit the given zoomFactor and pass it to conformWindowToMetric()
There is a built-in limitation, which now is even
lowered to 100000 pixels horizontally.
With the techniques introduced in this changeset, it seems possible
to support more -- yet this would be a case of unnecessary genericity;
handling such large numbers will drive more computations into the
danger zone, and doing so incurs cost in terms of testing and debugging.
Putting that into context: contemporary displays are not even 4K on
average, and it does not look likely even for cinema displays to go
way beyond 8K -- so yes, I want display hardware with 100000 pixels!!
The key takeaway of this changeset:
- can calculate px = trunc(zoomFactor * duration) step-wise (see sketch below),
even when the direct calculation would lead to wrap-around
- can safely adjust and fix the zoomFactor using Newton approximation
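The step-wise calculation can be sketched as follows (positive values assumed):

    #include <cstdint>
    
    // px = trunc(n/d · dur) computed without forming the huge product n·dur
    inline int64_t
    scaleTrunc (int64_t dur, int64_t n, int64_t d)
    {
        int64_t whole = dur / d;      // full multiples of the denominator
        int64_t rest  = dur % d;      // remainder, guaranteed < d
        return n*whole + n*rest/d;    // n·rest < n·d, so no wrap if n·d fits
    }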
...even zooming out to span the complete time domain (~19000 years).
But only under the condition that the display window is sufficiently
large in terms of pixels, so we can handle the computation without
glitches.
This should not be a relevant limitation in practice, since a window
size of some 100 pixels is enough to handle Duration::MAX. Needless to add
that it's hard to imagine a media timeline of such tremendous size...
building on these Library changes, plus the safe-add function
developed some days ago, it is now possible to mark a large displacement
as `time::Offset`, and apply this to yield any valid time position,
even extreme negative values
The APIs for time quantisation were drafted in an early stage of the project
and then never followed up. Especially Grid::gridAlign has no
real-world usage yet, and is only massaged in some tests.
When looking at QuantiserBasics_test, I was puzzled and led astray,
since this function suggests materialising a continuous time into
a quantised time -- which it doesn't (there is another dedicated
function Quantiser::materialise() to that end); so, without engaging
in the discussion whether this function is of any use, I'll hereby
choose a name better reflecting what it does.
This is a deep refactoring to allow to represent the distance
between all valid time points as a time::Offset or time::Duration.
By design this is possible, since Time::MAX was defined as 1/30 of
the maximum value technically representable as int64_t. However,
introducing a different limiter for offsets and durations turns
out difficult, due to the inconsistencies in the existing hierarchy
of temporal entities, which in turn seem to stem from the unfortunate
decision to make time entities immutable, see #1261
Since the limiter is hard wired into the `time::TimeValue` constructor,
we are forced to create a "backdoor" of sorts, to pass up values
with different limiting from child classes. This would not be so
much of a problem if calculations weren't forced to go through `TimeVar`,
which does not distinguish between time points and time durations.
This solution rearranges all checks to be performed now by time::Offset,
while time::Duration will only take the absolute value at construction,
based on the fact that there is no valid construction path to yield
a duration which does not go through an offset first.
Later, when we're ready to sort out the implementation base of time values
(see #1258), this design issue should be revisited
- either we'll allow derived classes explicitly to invoke the limiter functions
- or we may be able to have an automatic conversion path from clearly
marked base implementation types, in which case we wouldn't use the
buildRaw_() and _raw() "backdoor" functions any more...
While the calculation was already basically under control, I was just not content
with the achieved numeric precision -- and the fact that the test case in fact
misses the bar, making it difficult to demonstrate that the calculation
is not derailed. I just had the gut feeling that it must somehow be possible
to achieve an absolute error level, not just a relative error level of 1e-6.
Thus I reworked this part into a generic helper function, see #1262
The end result is:
* partial failure: we can only ''guarantee'' the relative error margin of 1e-6
* but in most cases, short of the most extreme numbers, the sophisticated
solution achieves much better results, way below the absolute error level of 1µ-tick
Thus, using rational numbers, we now have a solution that is absolutely precise
in the regular case, and gradually introduces errors at the domain bounds,
but with a guaranteed relative error margin of 1e-6 (== Time::SCALE)
...now able to achieve the expected error bound of 1e-6
...this seems to be the worst case for ZoomWindow::setVisiblePos(factor)
on an extremely large timeline; it seems not possible to achieve the
goal set for this special test case
...in a similar vein as done for the product calculation.
In this case, we need to check the dimensions carefully and pick
the best calculation path, but as long as the overall result can
be represented, it should be possible to carry out the calculation
with fractional values, albeit introducing a small error.
As a follow-up, I have now also refactored the re-quantisation
functions, to be usable for general requantisation to another grid,
and I used these to replace the *naive* implementation of the
conversion FSecs -> µ-Grid, which caused a lot of integer-wrap-around
However, while the test now works basically without glitch or wrap,
the window position is still numerically off by 1e-6, which becomes
quite noticeable here due to the large overall span used for the test.
...using a requantisation trick to cancel out some factors in the
product of two rational numbers, allowing to calculate the product
without actual multiplication of (dangerously large) numbers.
with these additional safeguards, the anchorWindowAtPosition()
succeeds without Integer-wrap, but the result is not fully correct
(some further calculation error hidden somewhere??)
- detailed documentation of known problematic behaviour
when working with rational fractions
- demonstrate the heuristic predicate to detect dangerous numbers
- add extensive coverage and microbenchmarks for the integer-logarithm
implementation, based on an example from StackOverflow. Surprising result:
the std::ilogb(double) function is of comparable speed, at least for
GCC-8 on Debian-Buster.
Especially rational numbers with large denominator can be insidious,
since they might cause numeric overflow on seemingly harmless operations,
like adding a small number.
A solution might be to *requantise* the number into a different,
way smaller denominator. Obviously this is a lossy operation;
yet a small and controlled numeric error is always better than
an uncontrolled numeric wrap-around.
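One conceivable shape of such a requantisation helper (a sketch; assumes the intermediate product still fits into 64 bits):

    #include <cstdint>
    #include <boost/rational.hpp>
    
    using Rat = boost::rational<int64_t>;
    
    inline Rat
    reQuant (Rat val, int64_t newDenom)
    {   // rescale the numerator into the new denominator, truncating the rest;
        // lossy, but the error stays below 1/newDenom
        int64_t newNum = val.numerator() * newDenom / val.denominator();
        return Rat{newNum, newDenom};
    }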
- protection against negative numbers seems adequate
- a possible concern is the handling of very large time spans
- definitely have to guard against "poisonous" fractions
(e.g. n / INT_MAX)
- some test definitions were simply numerically wrong
- changed some aspects of the specified behaviour, to be more consistent
+ scrolling is more liberal and always allows extending the canvas
+ setting window to a given duration expands around anchor point
This function allows to move the visible range such that it contains
a given time position; the relative location of this point within
the visible range however is in turn determined by relating it to
the current overall canvas: if we are close to the beginning, the
position is also located rather to the left side, and if we're
approaching the canvas end, the position tends to the right side...
(and yes, I am aware that the variant taking a rational number
can be derailed by causing internal numeric overflow, when passing
a maliciously crafted rational number, like INT_MAX-1 / INT_MAX )
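The positioning rule, paraphrased in code (sketch, names assumed):

    #include <cstdint>
    #include <boost/rational.hpp>
    
    using Rat = boost::rational<int64_t>;
    
    // place the window start such that the position divides the visible
    // window in the same ratio as it divides the overall canvas
    inline int64_t
    windowStart (int64_t pos, int64_t canS, int64_t canE, int64_t winLen)
    {
        Rat relPos{pos - canS, canE - canS};    // 0 at canvas start … 1 at end
        int64_t off = boost::rational_cast<int64_t> (relPos * winLen);
        return pos - off;
    }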
Rearrange the internal mutator functions to follow a common scheme,
so that most of the setters can be implemented by simple forwarding.
Move the change-listener triggering up into the actual setters.
This makes further test cases pass
- verify_setup
- verify_calibration
...implying that the pixel width is now retained
and basic behaviour matches expectations
Since conformWindowToMetric() is always called prior to performing
the complete invariant-reestablishment sequence, we can even integrate
the rule for relative scaling into this central function, which
simplifies the mutation implementation significantly. Should
relative positioning go south, the following sanity checks
will push the window back into bounds.
With these changes, the verify_simpleUsage() test passes!
Extensive tests with corner cases soon highlighted this problem
inherent to integer calculations with fractional numbers: it is
possible to derail the calculation by numeric overflow, with values
not excessively large, but carrying large numbers in the denominator.
This problem is typically triggered by addition and subtraction,
where you'd naively not expect any problems.
Thus changed the approach in the normalisation function, relying
instead on an explicitly coded test, and performing the adjustment
only after conversion back to simple integral micro-tick scale.
Getting all those requirements translated into code turns out to be a challenging task;
and the usual approach to handle such a situation is to define **Invariants**
in conjunction with a normalisation scheme; each manipulation will then be
translated into an invocation of one of the three fundamental mutators,
and these in turn always lead into the common normalisation sequence.
__Invariants__
- oriented and non-empty windows
- never alter given pxWidth
- zoom metric factor < max zoom
- visibleWindow ⊂ Canvas
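A simplified mock of that scheme (not the actual ZoomWindow code):

    #include <algorithm>
    #include <cstdint>
    
    class ZoomWindowSketch
      {
        int64_t canS_=0, canE_=1000;   // overall canvas
        int64_t winS_=0, winE_=100;    // visible window
        
        void
        normalise()                    // common sequence re-establishing the invariants
          {
            if (winE_ <= winS_) winE_ = winS_ + 1;    // oriented and non-empty
            winS_ = std::max (winS_, canS_);          // visibleWindow ⊂ canvas
            winE_ = std::min (winE_, canE_);
          }
      public:  // fundamental mutators, each funnelling into normalise()
        void setRange (int64_t s, int64_t e) { winS_=s; winE_=e; normalise(); }
        void nudge    (int64_t off)          { winS_+=off; winE_+=off; normalise(); }
      };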
Writing this specification unveiled a limitation of our internal
time base implementation, which is a 64bit microsecond grid.
As it turns out, any grid-based time representation will never
be precise enough to handle some relevant time specifications,
which are defined by a divisor. Most notably this affects the precise
display of frame durations in the GUI, and, even more relevant,
the sample-accurate editing of sound in the timeline.
Thus I decided to perform the internal computation in ZoomWindow
as rational numbers, based on boost::rational
Note: implementation stubbed only, test fails
This ZoomWindow_test highlights again the question about the intended usage
of the Lumiera time entities. In which way do we want to perform time calculations,
and under which circumstances is it adequate to perform arithmetic on
raw time values?
These questions made me think about rather far reaching concerns regarding
subsidiarity and implicit or explicit usage context. Basically I could
reconfirm the design choices taken some years ago -- while I must admit
that the project is headed towards a way larger scale and more loose
coupling of the parts, than I could imagine several years ago, at the
time when the design started...
As a side note: we can not avoid that some knowledge about the time implementation
leaks out from the support lib; time codes themselves are tightly coupled
to the usage scenario within the session and can not be used as means
for implementing UI concerns. And the more generic time frameworks,
like std::chrono (as much as it is desirable to have some integration here)
will not be of any help for most of our specific usage patterns.
The reason is: for film editing we do not have a global time scale;
rather, the truth is established by when the film starts....
implement the first test case: nudge the zoom factor
⟹ scale factor doubled
⟹ visible window reduced to half size
⟹ visible window placed in the middle of the overall range
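A self-contained mock of the arithmetic this test expects (numbers made up):

    #include <cassert>
    
    int main()
    {
        double scale = 12.5;               // px per second
        int winStart = 0, winEnd = 64;     // visible window == overall range
        
        // nudge the zoom factor by one step:
        scale *= 2;                              // ⟹ scale factor doubled
        int dur  = (winEnd - winStart) / 2;      // ⟹ window reduced to half size
        winStart = winStart + dur/2;             // ⟹ centred within the old range
        winEnd   = winStart + dur;
        
        assert (scale == 25.0 && winStart == 16 && winEnd == 48);
    }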