Initially we assumed that »handling time« is largely a matter of computation.
''Time is just a value'' and can be treated with integral arithmetic, some
modulus computations and pre-defined constants.
This turned out to be a mistake. Anything related to time is intricate,
and it is essential to distinguish different meanings of "time":
- time values related to an internal computation framework have
implementation-defined meaning and should be ''marked as opaque''
- temporal data can be ''mapped to a grid scale'' — an essential step
for media processing, which however incurs information loss
- externally relevant time specifications are represented symbolically,
by translation into a ''Time Code''
Drawing from these insights, a framework for time handling has been established,
building in part on the low-level, function-style base implementation.
Exposing this base implementation as a C-library however is considered
dangerous, as it may lure users into ''ad hoc'' computations, which are a major
source of inconsistencies and notorious defects in many media applications.
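To make the distinction tangible: a minimal sketch of an opaque internal time value
(illustrative only, not the actual Lumiera interface), where raw numbers can neither
leak out nor be injected directly.
{{{#!cpp
#include <cstdint>

// Sketch: an opaque internal time value (hypothetical, simplified)
class TimeValue
{
    int64_t micros_;                    // implementation-defined meaning

protected:
    explicit TimeValue (int64_t raw)    // creation restricted to the framework
      : micros_{raw}
    { }

public:
    bool operator< (TimeValue const& o) const { return micros_ < o.micros_; }
    // deliberately no conversion back to a raw number
};
}}}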
The `FixedFrameQuantiser` relied on three functions from the raw-time handling library.
Since this (and NTSC drop-frame) are the only usages, these functions
can be relocated into the implementation translation unit `lib/time/quantiser.cpp`
On closer inspection, this reveals some room for improvements:
Instead of relying on raw-computation functions written in C,
we could rather invert the dependency and express these computations
in terms of our Time-entities, which are written in C++, are much more
systematic and provide consistency checks and protection against numeric
overflow, all integrated with linear arithmetic and concise notation.
After performing these rearrangements,
most of the functions can be collapsed into ''almost nothing''.
This was taken as an opportunity to re-check and improve the remaining
implementation core of the `FixedFrameQuantiser` -- the handling of
extreme corner cases could be much improved by representing the
"grid-local time" as `Offset`, which doubles the possible value range.
The reworked unit test shows that, with this change, now the limitation
happens prior to quantisation, meaning that we always get a grid-aligned
result, even in the most extreme corner cases.
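To illustrate the kind of computation involved, here is a stand-alone sketch with
hypothetical names (not the actual `FixedFrameQuantiser` code), showing grid
alignment with floor semantics, so that times before the grid origin also snap
onto a grid point:
{{{#!cpp
#include <cstdint>

// Sketch: align a raw time onto a fixed frame grid (frameDur > 0)
int64_t gridAlign (int64_t rawTime, int64_t origin, int64_t frameDur)
{
    int64_t off     = rawTime - origin;     // "grid-local time" as offset
    int64_t frameNr = off / frameDur;       // C++ division truncates toward zero...
    if (off % frameDur < 0) --frameNr;      // ...so adjust to floor for negative offsets
    return origin + frameNr * frameDur;     // grid-aligned result
}
}}}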
...extract these functions and the associated test
from the low-level C time handling library and
document them with a dedicated C++ header and test.
''This is unfinished work'' —
the extracted functions provide only the low level computations;
actually, a specialised time quantisation or time code would be required.
------------
Note though,
after extracting these functions, the rest of the plain-C test
can be removed, since equivalent functionality is covered in
much more detail by the tests of the C++ time handling framework.
Notably this allows us to get rid of the direct component accessor functions.
------------
__Remark__: the base implementation of many time conversion functions,
and especially NTSC drop-frame, was provided by Stefan Kangas.
See:
6a44134833
While these functions may seem superficially plausible,
I have increasingly come to the conclusion that offering such
functions as ''basic building blocks'' is in itself an
ill-guided approach to the handling of time entities.
Time is neither „just a number“, nor does it „contain“ hours, minutes and seconds.
It is possible to ''represent'' it through a **time-code**, which incurs
a quantisation step and implies a reference grid.
Thus Lumiera ''should not offer'' a »basic time handling library«.
Doing so would be just an invitation to bypass proper time handling
and avoid the use of more demanding but also more adequate mental concepts.
So the next step will be to remove functions not deemed adequate, and
instead inline the respective modulus-based computations directly.
Other functions can be integrated into the respective implementation
translation units for time quantisation and timecode representation.
Indeed — this change set is kind of sad.
Because I still admire the design of the GAVL library,
and would love to use it for processing of raw video.
However, up to now, we never got to the point of actually
doing so. For the future, I am not sure if there remains
room to rely on lib-GAVL, since FFmpeg roughly covers
a similar ground (and a lot beyond that). And providing
a plug-in for FFmpeg is unavoidable, practically speaking.
So I still retain the nominal dependency on lib-GAVL
in the Build system (since it is still packaged in Debian).
But it is pointless to rely on this library just for an
external type-def `gavl_time_t`. We owe much to this
inspiration, but it can be expected that we'll wrap
these raw time-values into a dedicated marker type
soon, and we certainly won't be exposing any C-style
interface for time calculations in future, since
we do not want anyone to side-step the Lumiera
time handling framework in favour of working
„just with plain numbers“
NOTE: the lib-GAVL homepage has moved to GitHub:
https://github.com/bplaum/gavl
After the Dummy-Player (interface and prototype implementation)
is removed, some further bits of debris can be sorted out.
The first version of the Timeline was mostly rewritten and
the obsolete parts were removed in 2023, yet a small number
of files was kept around as reference for later.
Some of these are no longer considered of any informational value,
others were moved into the widget package (and further problematic
parts were annotated, but can still be used with GTK-3)
For the [ticket:1221 »Playback Vertical Slice«] one of the next steps
will be to define a way to pass buffers from the core to the UI.
The `DisplayService` and the `DummyPlayerService` were parts of an
early architecture study to see how such a flexible connection between
components in different layers can be accomplished.
The findings from this prototyping work helped to shape the design
of the actual `PlayService`...
As an example, the `PixbufDisplayer` needs packed RGB888 data,
while the `XvDisplayer` expects YUV (MPEG-style) pixels.
The research setup is not well equipped to handle any kind of content
or format negotiation; yet for the experimentation, the connections can
be wired as !SigC-Signals. After the preceding refactorings,
`DummyImageGenerator` can be configured to perform the conversion to YUV
only when necessary, and to use the working buffer flexibly.
When supplied with packed RGB pixel data, the display in the Gtk::Image
is now correct, and also handles layout and scaling appropriately.
- since we now use 32-bit int arithmetic (which is faster),
we can also use the exact values of the MPEG / Rec.601 coefficients
- and also the generation of the NTSC colour bar pattern
can be written much more simply and clearly in C++
This is a first step towards the ability to produce several different output formats...
Refactor the code to separate
- the double buffering
- the actual image generation, which works in RGB
- the conversion routine
Furthermore, replace unsigned char by std::byte
and introduce std::array and structured binding
to avoid many usages of pointers; hopefully this
makes the intention of the code clearer.
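A minimal sketch of the intended style (hypothetical example, not the actual code):
{{{#!cpp
#include <array>
#include <cstddef>
#include <utility>

using Pixel = std::array<std::byte, 3>;     // packed RGB, value semantics

void swapRB (Pixel& px)
{
    auto& [r, g, b] = px;                   // structured binding instead of pointer math
    std::swap (r, b);                       // std::byte: no accidental arithmetic
}
}}}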
Verified and cross-checked the actual conversion logic;
in fact this is a conversion to "YUV" as used by MPEG,
which in more precise terms is Y'CbCr with Rec.601 colour space
and a scan range limitation (16...235) on the Luma component.
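For reference, the widely published fixed-point form of this limited-range
Rec.601 conversion, sketched in 32-bit integer arithmetic (the rounding
details here are illustrative):
{{{#!cpp
#include <cstdint>

// limited-range Rec.601: R'G'B' (0…255) → Y'CbCr;
// Luma scaled into 16…235, chroma centred at 128
void rgb2ycbcr (int r, int g, int b,
                std::uint8_t& y, std::uint8_t& cb, std::uint8_t& cr)
{
    y  = std::uint8_t ((( 66*r + 129*g +  25*b + 128) >> 8) +  16);
    cb = std::uint8_t (((-38*r -  74*g + 112*b + 128) >> 8) + 128);  // arithmetic shift
    cr = std::uint8_t (((112*r -  94*g -  18*b + 128) >> 8) + 128);  // on negative sums
}
}}}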
Generally speaking, this experiment shows that we need some additional know-how
regarding the XVideo standard. And we should re-think the means of integration.
From some further debugging and experimenting with this code,
the following conclusions can be drawn:
- the code retrieves the GDK-Window, to which the widget was mapped
- it uses this to access an underlying X-Window
- seemingly the docking panel uses a separate X-Window, which also
hosts the header area of the panel
- this whole underlying X-Window is treated with some compositing method,
presumably (just guessing from the code) we use keying with a
marker-colour. This explains why the whole area of the panel
is no longer updated regularly
- furthermore, we need to take into account that the actual display widget
area uses some part of this window, which can be found out from
the VideoWidget's ''Allocation''
- when correcting the origin of the video display by using this
allocation's origin, at least the display of the video is precisely
at the right location and size
Furthermore, the code takes quite a shortcut: it basically
looks for one specific display format, and uses the corresponding
configuration for the "port" it got.
This format has the abbreviation "YUY2" (packed)
''it basically works...''
TODO
- the image is updated only when moving the mouse over the widget
- calculation of window decoration is not correct
- a strange transparent zone appears in the UI directly above the widget
The new solution is to (still) use a Gdk::Pixbuf,
because in GTK-3 the more modern alternatives are not yet provided.
But we can pass this Pixbuf directly to Gtk::Image, instead of trying
to do low-level drawing on the X-Window.
Other than that, I more or less transformed the old C-style code
into the corresponding calls in Gdkmm.
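The resulting call pattern in gtkmm-3 looks roughly like this (a simplified
sketch; `showFrame` and its parameters are made up for illustration):
{{{#!cpp
#include <gtkmm.h>

void showFrame (Gtk::Image& image, guint8 const* rgbData, int width, int height)
{
    // Note: create_from_data() does not copy; rgbData must outlive the Pixbuf
    auto pixbuf = Gdk::Pixbuf::create_from_data (rgbData,
                                                 Gdk::COLORSPACE_RGB,
                                                 false,        // no alpha channel
                                                 8,            // bits per sample
                                                 width, height,
                                                 width * 3);   // rowstride in bytes
    image.set (pixbuf);    // display via Gtk::Image, no low-level X drawing
}
}}}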
__SUCCESS__: we get **some display**
__TODO__ : the displayed image is garbage, which means that the pixel layout is not correct
In cases where none of the preferred, library based approaches works,
we can attempt to provide a ''poor man's video display'' by placing the
image data into a bitmap, which can be rendered by the UI toolkit.
The existing solution for such bitmap content is the Gdk Pixbuf.
But note, this approach is ''deprecated with GTK-4'', since,
generally speaking, GTK moves away from the notion of an
''underlying windowing system'' and relies on the concept
of ''graphic surfaces, textures and paintables'' rather...
Here we face the problem that the buttons in the play control panel
need to be connected to the controller, which sits in the viewer panel.
Obviously a direct connection is not correct, since there could be
several panels, and furthermore the controller should be a service and
addressed by commands via UI-Bus.
But this is an experiment, and we'll have to figure out anyway
how the playback-display-connection works, as one of the next tasks
for the »Playback Vertical Slice«
Thus we'll use the PanelManager to fetch the first viewer panel,
and then forward calls to the controller. With this setup,
the controller logic can be verified by printing to STDOUT.
TODO: we are not yet invoking any XVideo code....
While this is not strictly necessary for this experiment,
this is something we should try to establish early:
A »play control« should be handled as an independent UI element,
without tying it logically to some viewer (or timeline); the reason is
that such a play control needs a set of very well designed keyboard bindings,
and thus we will attempt to use a focus concept to link to some active viewer,
instead of creating one primary viewer, which gets the benefit of the
easily accessible keybindings.
Basically we want to create an explicit association between
- a timeline
- some viewer
- a play-control
Introducing a new kind of panel shows again that the `PanelManager`
needs a rework; everything there is far too much ''hard wired''
And the new panel with the play control needs an **Icon** — which is
a challenge in itself; my proposal here is to build on the film metaphor,
and combine the symbol of "Play / Pause" with a stylised film or tape player
(with the secondary idea that this icon also somewhat looks like an owl face)
- place a `DemoController` instance as direct member into the `ViewerPanel`
- create a direct wiring, so that the `DemoController` can push to the `VideoDisplayWidget`
- make the `DemoController` directly instantiate a `TickService` and `DummyImageGenerator`
- reimplement play control functions by direct invocation
- add a new class to the Lumiera CSS stylesheet
- initial assessment shows that the design of the **Displayer** framework is adequate
- for context: this code originates from the »Kino« video editor 20 years ago
- notably the `XvDisplayer` contains almost no GTK(2)-code
- so it seems feasible to attempt a port to GTK-3
This is a limited research project, and the setup shall be based mostly on existing code.
In the early stage of the Lumiera application, we did some architecture studies
regarding ongoing video generation and display, resulting in a `DemoVideoPlayer`.
This code was broken by the GTK-3 transition, but kept in-tree for later referral.
For this research project, we can mostly short-circuit all of the layer separation
and service communication stuff and build a minimal invocation directly hooked-up
behind the GUI widget. In preparation for such a setup, the existing
demo player code is partially forked by this changeset, pushing aside
the (defunct) !DummyPlayer pseudo-subsystem.
...to be more compliant with the »Lumiera Forward Iterator« concept.
This can be easily achieved by inheriting from util::RegexSearchIter,
similar to the example in CSV.hpp
Regarding #896, I changed the string rendering to use fs::path::generic_string
where appropriate, which means that we're using the normalised path rendering.
Since C++17 we can use std::filesystem instead (and we ''do use it'' indeed;
a sketch follows after the list below)
- relocate the `/lib/file.hpp` header
- adapt the self-discovery of the executable to using std::filesystem
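A sketch of both points combined (Linux-specific self-discovery via `/proc`;
illustrative, not the exact code):
{{{#!cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

std::string currentExecutable()
{
    fs::path exe = fs::read_symlink ("/proc/self/exe");  // self-discovery on Linux
    return exe.generic_string();   // normalised rendering with '/' separators (#896)
}
}}}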
Furthermore, some research regarding XVideo and Video Output
- remove obsolete configuration settings
- walk through all settings according to the documentation
https://www.doxygen.nl/manual/config.html
- now try to use the new feature to rely on Clang for C++ parsing
- walk through the doxygen-warnings.txt and fix some obvious misspellings
and structural problems in the documentation comments.
With Debian-Trixie, we are now using Doxygen 1.9.8 —
which produces massively better results in various fine points.
However, there are still problems with automatic cross links,
especially from implementation to the corresponding test classes.
- with Debian 12/13, the top-level `/bin`, `/sbin` and `/lib`
are collapsed into `/usr`. Seemingly this has prompted changes
to the way the shell prints some error messages. This broke
the expectations of some tests of the test-framework itself.
- SCons always had the policy to ''sanitise'' the invocation environment,
to prevent unintended impact of environment settings to the test subject.
Seemingly this now also leads to `$HOME` not being defined.
Our file handling framework however expects to be able to expand "~"
- An old-style cast in the constructor lib::diff::Record(Mutator const&)
is now translated into a static_cast (≙conversion); and since the appropriate
conversion operator is missing on Mutator, the constructor attempts to
create a temporary, by re-invoking the same constructor ⟹ Stackoverflow ↯
(a minimal reconstruction follows after this list)
- conversion from pointer to bool now counts as ''narrowing conversion''
- constructor names must not include template arguments (enforced with C++20)
- better use std::array for some dummy test code
Several further warnings are due to known obsoleted or questionable constructs
and were left as-is (e.g. for ScopedHolder) or just commented for later referral
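A minimal reconstruction of the static_cast trap described above
(hypothetical names, mirroring the situation):
{{{#!cpp
struct Mutator { /*...*/ };

struct Record
{
    Record (Mutator const& mut)
    {
        // Mutator provides no conversion operator, so the compiler satisfies
        // this cast by invoking Record(Mutator const&) again ⟹ endless recursion
        Record tmp = static_cast<Record> (mut);
        (void)tmp;
    }
};
}}}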
This is an advanced diagnostic, added (presumably) with GCC-13,
which attempts to protect against an insidious side-effect of ''overload resolution''
Basically C++ (like its ancestor C) is oriented towards direct linkage and adds
the OO-style dynamic dispatch (through virtual functions and a VTable)
only as an extension, which must be requested explicitly.
Thus the resolution of ''overloads'' (as opposed to ''overridden'' virtual functions)
always takes precedence and happens within the directly visible scope,
which can cause the compiler to perform an implicit conversion instead of
invoking a different virtual function, which is defined in a base class.
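The classic shape of the hazard, as a minimal illustration (not taken from the code base):
{{{#!cpp
#include <iostream>

struct Base
{
    virtual ~Base() = default;
    virtual void process (long) { std::cout << "Base::process(long)\n"; }
};

struct Sub : Base
{
    void process (int) { std::cout << "Sub::process(int)\n"; }  // hides Base::process(long)
};

int main()
{
    Sub s;
    s.process (42L);   // implicit long→int conversion wins: prints "Sub::process(int)",
}                      // the virtual function from the base class is bypassed
}}}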
However, this diagnostic seems to be implemented in an overly zealous way:
The compiler warns at the time of the type instantiation, and even in cases
where it is effectively impossible to encounter this dangerous shadowing situation.
See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109740
This leads to several ill-guided warnings in the Lumiera code base, which unfortunately
can only be addressed by disabling this diagnostic for all usages of some header.
The reason is that we often generate chains of template instantiations driven by type lists,
and in such a usage pattern it is not even possible to bring all the other inherited overloads
into scope (with a `using BASE::func` clause), because such a specification would be ambiguous
and cause a real compile error, since even the interface is generated from a chain of mix-in templates
Future C++ versions will no longer generate default copy operations
once any single one was defined explicitly. So the goal is to kind-of
''enforce the rule of five'' (if you define one, define them all).
However, sometimes one of these special operators must be defined for a different reason,
e.g. because it is defined as protected, yet should not be exposed on the public API.
In such cases, any other copy operation which still is valid in the default form
must be declared explicitly ''as defaulted''
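A sketch of this pattern (hypothetical class, condensing the scenario just described):
{{{#!cpp
class Entity
{
protected:
    Entity (Entity const&) = default;   // copying deliberately restricted to subclasses

public:
    Entity() = default;

    // the user-declared copy-ctor would otherwise deprecate the implicitly
    // generated copy-assignment, which is still valid ⟹ declare it as defaulted
    Entity& operator= (Entity const&) = default;
};
}}}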
Overall this seems to be quite an improvement --
and it highlights (again) some known instances of questionable design,
which are mostly obsoleted and require clean-up anyway, or (as in the case of the
Placements) indicate »placeholder code« where the actual solution still needs to be worked out
Oh this is an interesting one...
GCC now highlights situations with `-Wpessimizing-move`,
where an overly zealous developer attempts to optimise by `std::move`,
which however prevents the compiler from applying the ''Return Value Optimisation''
The latter is mandatory for returned temporaries (prvalues) since C++17,
and essentially means that a value object created within a function and then
returned (by value) will actually be created directly in the target location,
possibly eliding a whole chain of delegating value returns.
Thus: if we write `std::move(value)`, we change the returned expression into an RValue reference,
and thereby ''force the compiler'' to invoke a move-ctor....
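In code, the warned-about pattern and its fix (illustrative):
{{{#!cpp
#include <string>

std::string render()
{
    std::string result{"…rendered text…"};
    return std::move (result);   // -Wpessimizing-move: forces the move-ctor
}                                // and blocks NRVO; plain `return result;` is better
}}}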
Some pre C++11 features are marked deprecated and will be rejected with C++20
Notably the old marker interfaces for unary (and binary) functions are no longer
needed, since function-like objects can be detected by traits or concepts nowadays
Moreover we can get rid of some boost(bind) usages and use a λ
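For illustration, the typical modernisation step (hypothetical example):
{{{#!cpp
#include <algorithm>
#include <vector>

// formerly: a functor class derived from std::unary_function<int,bool>,
// possibly adapted through boost::bind; nowadays simply a λ
long countPositive (std::vector<int> const& vals)
{
    return std::count_if (vals.begin(), vals.end(),
                          [](int v){ return v > 0; });
}
}}}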
* need to upgrade our custom packages to current standards
* switch those packages from CDBS to dh
* re-build on Trixie and upgrade the Lumiera DEB-Depot
After these (in detail quite expensive) preparations,
the build with SCons and GCC-14 can be started.
Fix some further (basically trivial) compile problems,
uncovered by the improved type checking of modern compilers.
Note: a tremendous number of warnings (and deprecations) are
also indicated, which will be addressed later....
NodeBase_test demonstrates the building blocks of a Render Node,
and verifies low-level mechanics of those building blocks, which
can be quite technical. At the top of this test however are some
very basic interactions, which serve as an introduction.
__Remark__: renamed the low-level technical dispatch-access
for the parameter-accessors in `TurnoutSystem` to be more obvious,
and added comment (I was confused myself how to use them properly)
This is a crucial feature, discovered only late, while building
an overall integration test: it is quite common for processing functionality
to require both a technical, and an artistic parametrisation. Obviously,
both are configured from quite different sources, and thus we need a way
to pre-configure ''some parameter values,'' while addressing other ones
later by an automation function. Probably there will be further similar
requirements, regarding the combination of automation and fixed
user-provided settings (but I'll leave that for later to settle).
On a technical level, wiring such independent sources of information
can be quite a challenging organisational problem — which however can be
decomposed using ''partial function closure'' (as building a value tuple
can be packaged into a builder function). Thus in the end I was able to
delegate a highly technical problem to an existing generic library function.
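The gist of that decomposition, reduced to a stand-alone sketch (all names are hypothetical):
{{{#!cpp
struct Frame { float data[64] = {0}; };

// »raw« processing function: one technical and one artistic parameter
inline void process (int bufferSiz, float gain, Frame& out)
{
    for (int i=0; i < bufferSiz and i < 64; ++i)
        out.data[i] *= gain;
}

// partial closure: fix the technical parameter now,
// leave the artistic one open for automation at invocation time
inline auto preConfigure (int bufferSiz)
{
    return [bufferSiz](float gain, Frame& out)
                      {
                        process (bufferSiz, gain, out);
                      };
}
}}}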
* now able to demonstrate close-front, close-back and close-argument
* can also apply the same cases to `std::array`, with input and
output type seamlessly adapted to `std::array`
__Summary__:
* the first part to prepare a binding involves creating a mapped tuple,
with re-ordered elements and some elements replaced by placeholder-markers.
This part **must not use RValue-References** (doing so would be possible
only under very controlled conditions)
* the second part, which transports these mapped-tuple elements into the binder
''could be converted to perfect-forwarding.'' This would require to replace
the `Apply<N>` by a variadic template, delegating to `std::apply` and `std::bind`
With this changeset, I have modernised a lot of typedefs to make them more legible,
and I have introduced perfect-forwarding in the entrance path, up to the point
where the values are passed to `TupleConstructor`.
With these additions, all conceivable cases are basically addressed.
Take this as an opportunity to investigate how the existing implementation
transports values into the Binder, where they will be stored as data fields.
Notably the mechanism of the `TupleConstructor` / `ElmMapper` indeed
''essentially requires'' to pass the initialisers ''by-reference'',
because otherwise there would be limitations on possible mappings.
This implies that not much can be done for ''perfect forwarding'' of initialisers,
but at least the `BindToArgument` can be simplified to take the value directly.
...which should ''basically work,'' since `std::array` is ''»tuple-like«'' —
BUT unfortunately it has a quite distinct template signature which does not fit
into the generic scheme of a product type.
Obviously we'd need a partial specialisation, but even re-implementing this
turns out to be damn hard: there is no way to generate a builder method
with a suitable explicit type signature directly, because such a builder would
need to accept precisely N arguments of the same type. This leads to a different
solution approach: we can introduce an ''adapter type'', which will be layered
on top of `std::array` and just expose the proper type signature so that the
existing implementation can handle the array, relying on the tuple-protocol.
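Sketched, the mismatch and the adapter idea look like this (simplified;
`ArrayAdapt` is a made-up name):
{{{#!cpp
#include <array>
#include <type_traits>

// std::array<T,N> is »tuple-like« (std::get and std::tuple_size apply),
// but its signature template<typename T, size_t N> does not fit a
// variadic product type template<typename...TYPES>.
// Adapter: expose the variadic signature on top of the array storage...
template<typename...TYPES>
struct ArrayAdapt
  : std::array<std::common_type_t<TYPES...>, sizeof...(TYPES)>
{ };

static_assert (sizeof(ArrayAdapt<int,int,int>) == sizeof(std::array<int,3>));
}}}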
__Note__: this changeset adds a convenient pretty-printer for `std::array`,
based on the same forward-declaration trick employed recently for `lib::Several`.
You need to include `lib/format-util.hpp` to actually use it.
What emerges here, seems to be a generic helper to handle
partial closure of ''tuple-like'' data records. In any case,
this is highly technical meta-programming code and mandates
extraction into a separate header — simplifying `NodeBuilder`
Likely the most widely used facility which enters into meta-programming
with type sequences is our function-signature-detector `_Fun<X>`,
which returns an argument type-sequence.
After adding some bridges for cross-compatibility,
function-arguments are now extracted as a new-style
''variadic sequence'' without trailing `NullType`.
Doing so required augmenting some of the most widely used
sequence-processing helpers with a definition variant based on
variadics, so that they work seamlessly also with the new-style
variadic sequences; these variants will typically obsolete the
original solutions later on, which at the time had to be tediously
coded as a series of explicit specialisations for N arguments.
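For orientation, a minimal detector in the spirit of `_Fun<X>` (a reduced
sketch, not the actual implementation):
{{{#!cpp
#include <tuple>
#include <type_traits>

template<typename SIG>
struct FunTraits;                       // primary: undefined for non-functions

template<typename RET, typename...ARGS>
struct FunTraits<RET(ARGS...)>          // match plain function signatures
{
    using Ret  = RET;
    using Args = std::tuple<ARGS...>;   // variadic sequence, no trailing NullType
};

static_assert (std::is_same_v<FunTraits<long(int,char)>::Ret, long>);
static_assert (std::is_same_v<FunTraits<long(int,char)>::Args, std::tuple<int,char>>);
}}}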
...on top of the parameter-decorating functionality developed thus far.
The idea is to allow supplying ''some parameters'' in the `NodeBuilder`
directly, while the remaining parameters will be drawn from automation.
Several years ago, I developed some helpers for partial function closure.
Unfortunately these utils are somewhat limited, and rely on some pre-C++11
constructs, yet seem to be usable for the task at hand, since parameters
are always expected as value objects by definition.
This changeset shows a working proof-of-concept for left-closing a
parameter tuple with 5 elements; this turned out to be surprisingly difficult
due to the full genericity of the acceptable parameter-aggregates...
Seemingly the definition can not be much simplified,
since there is no way around handling several definition flavours
of the processing-functor distinctly.
However, the definitions can be rearranged to be clearer,
the resulting type of the `FeedPrototype` can be deduced from the
builder function, and more stringent assertions can be added
...the idea is to limit the scope of possible changes
and rather directly accept a functor to transform the parameters.
We then need to account for the possible flexibility in processing-functor
arguments, while in fact only two cases must be actually handled.
''This proof-of-concept works in test setup''
It seemed that the integration test will end up as a dull repetition
of already coded stuff, just with more ports and thus more boilerplate;
and so I reconsidered what an actually relevant integration test might encompass
- getting parameters from the invocation
- translating and wiring parameters
- which entails adapting / partially closing a processing function!
Thus — surprise — there is a new feature not yet supported by the `NodeBuilder`,
which would very likely be used in many real-world use cases: namely,
to adapt the parameter tuple expected by the binding from the library.
Obviously we want this, since many »raw« processing functions will expose a mix
of technical and artistic parameters; and we'd like to ''close'' the technical ones.
Such a feature ''should be implementable,'' based on the already developed
technique with the »cross builder«, which implies switching the template arguments
from within a builder expression. We already do this very thing for adapting
the parameter functor, and thus the main difficulty would be to compose an
adaptor functor onto the correct argument of the processing functor...
Which is... (well, it is nasty and technical, yet feasible).
Just wanted to use a helper function to build a source-data node.
However, the resulting node had a corrupted Node-ID spec.
Investigation with the debugger showed that the ID was still valid
while under construction, but showed up corrupted after returning from the
helper function.
As it turned out, the reason is related to the de-duplication of ProcID data.
While the de-duplicated strings themselves are ''not'' affected, the corruption
happened by an intermediate instance of ProcID, which was inadvertently created
and bound by-value into the builder-λ. The created Port then picks up a reference
to this temporary, leading to use-after-free on the string_view objects.
Obviously, `ProcID` must not be instantiated other than through the static
front-end `ProcID::describe`. Due to the private constructor, I can not make this
object non-copyable (because then the hash-set would not be allowed to emplace it).
But making it at least move-only will provoke a compiler error whenever
it is bound into a lambda capture by value, which hopefully helps to pinpoint this
insidious problem in the future...
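A reconstruction of the safeguard in a nutshell (hypothetical code, condensed
from the description above):
{{{#!cpp
class ProcID
{
public:
    ProcID (ProcID&&)       = default;   // move-only: the hash-set can still emplace
    ProcID (ProcID const&)  = delete;    // ...but by-value λ-capture fails to compile
};

void wire (ProcID& id)
{
 // auto builder = [id] { return &id; };   // ⚠ copies ⟹ hard compile error now
    auto builder = [&id]{ return &id; };   // capture the registered instance by reference
    builder();
}
}}}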
...while this is not the main objective of this test case,
and another test will focus on invocation with full-fledged
`TestFrame` buffers and hash computation...
...it is still a nice achievement to see that these simple
algebraic operations used for demonstration can actually be
invoked in the whole connected network :-)
Using a Node network with
* two source nodes
* one of them chained up linearly with a filter node
* then on top a mix node to combine both chains
Can now verify the generated port specs and verify proper connections
at node level and at port level
This was a lot of intricate technical work,
and is now verified in-depth, covering all possible cases.
__We can now__
* build Nodes
* verify in detail correct connectivity
* read Node-IDs and processing specifications
* maintain a symbolic spec for the arguments of a Port
(and beyond that, we can also **invoke nodes**, which remains to be formally verified)
An essential goal still to reach is a verification of the `NodeBuilder`'s products
Relying on the low-level diagnostic facilities pioneered last days,
it should now be possible to define simple and readable connectivity-clauses,
allowing to build some connected nodes and then verify the connections explicitly.
Handling of extended attributes in conjunction with the hash
turns out to be a rather complicated topic, with some tricky fine details.
And, most important, at the moment I am lacking the proper perspective
to address it and find adequate solutions. Luckily, the cache-key is
not required at the moment, ''and so this topic will be postponed''
As a minimum to complete the diagnostics functions, it is sufficient to set
the appropriate flags in the `ProcID` directly -- and to add some convenience wrappers.
...especially the extended attributes remain somewhat nebulous,
since none of the prospective usages are close to being implemented right now.
It seems we'll get two distinct sources at construction time of the Node
* additional qualifiers from the Library plug-in
* internal flags or qualifiers provided by the `NodeBuilder`
Another related concern seems to be generation of cache-keys,
which however will ''consume'' the proc-hash generated by the ProcID,
but not change the ID itself; cache-key generation is a tricky subject
and was somewhat overlooked regarding the connection to the `BufferProvider`.
Opened a new ticket #1292 as reminder for this issue.
...exploiting the ''backdoor access'' bypassing the VTable,
as made possible by a common congruent storage layout.
This is a first proof-of-concept, but also shows that the demo nodes
in NodeMeta_test are wired as expected. What is needed now is to make
this diagnostic access easier to invoke and more bullet-proof, by setting
the proper Attribute bits directly in the `NodeBuilder`