as it turns out, we need to set property_expand() on the child widget
within Gtk::Expander explicitly, to cause the child to grab any additional
available screen space (which obviously is what we want in the case of a
log display with scrollbars)
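A sketch of the relevant wiring, assuming gtkmm-3 (the helper function and
widget variables are made up for illustration; property_expand() is the
actual gtkmm property proxy):

    #include <gtkmm.h>

    // make the scrollable log claim extra space inside the expander
    Gtk::Widget&
    attachLog (Gtk::Expander& expander, Gtk::ScrolledWindow& scroller)
      {
        scroller.property_expand() = true;  // sets both hexpand and vexpand
        expander.add (scroller);            // scroller wraps the actual log view
        return scroller;
      }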
basically Gtk::Expander will do the trick.
However, resizing of the enclosing panel is not handled properly:
the log does not expand to take up the available space, as it did
automatically when just added directly into the frame
no need to define a private function on Wizard anymore, it just forwards the call
to the service actually implementing the view allocation. For now this is the
PanelLocator (and eventually this will be the ViewLocator / ViewSpecDSL)
PanelLocator is a sub component of the WindowLocator (top-level GUI service).
Eventually this shall become a mere widget/component access service, with the
actual lookup and allocation logic layered on top through ViewLocator, configurable
via ViewSpec-DSL.
We cannot implement the full scheme right now, since we're lacking knowledge
about internals of a typical Lumiera UI widget
This is only a premature hack, since the whole structure of PanelManager is somewhat broken.
Moreover, the ViewLocator is not really ready for use yet, so this hack at least
allows us to "reach into" a top-level window and "grab" the panel we need.
* have a dedicated "information hub" controller, which acts as a receiver of "error log messages" on the UI-Bus
* let that controller in turn allocate an appropriate view on demand
The goal is to build a (in itself completely meaningless) ping-pong interaction
between the UI and Proc-Layer, for the purpose of driving the integration ahead.
The immediate challenge is how to create and place an appropriate "GuiComponentView",
i.e. a Tangible, which is connected to the UI-Bus with a predictable EntryID.
And the problem is to get that settled right now, without building the envisioned
generic framework for View allocation in the UI. When this is achieved,
it should be a rather small step to actually send those notifications over
the UI-Bus, which is basically implemented and ready by now.
right now this will just end up in the log, since not even the
notification display is implemented beyond the GuiNotification-facade.
Anyway, we get some kind of communication now for real, in the actual application
...because, due to #211, we usually don't execute commands yet.
For now there is only the backdoor to prefix the command-ID with "test"
With this change, the TODO message appears now immediately after GUI start!
In the end, I decided against building a generic service here,
since it pretty much looks like a one-time problem.
Preferably, UI content will be pushed or pulled on demand,
rather than being actively created from within the UI-Layer
- activation signal is a facility offered and used solely by Gtk::Application
- we neither need nor want a Gtk::Application; we deal with our own application
concerns as we see fit.
Gio::Application provides a signal_activate(), which seems to be intended for
precisely the task we need here: to do something right after the UI is operative
...and while doing so, also re-check the state of the GTK toolkit initialisation.
Looks like we're still future-proof, while cunningly avoiding all this
Gnome-style "Application" blurb
I will abandon work on the ViewSpec DSL in its current shape (everything fine with that)
and instead work on a general UI start-up and content population sequence.
From there, my intention is to return to the docks, the placement of views
and then finally to the TimelineView
This finishes the first round of design drafts in this area.
Right now it seems difficult to get any further, since most of
the actual view creation and management in the UI is not yet coded.
looks like I'm trapped with the choice between a convoluted API design
and a braindead and inefficient implementation. I am leaning towards the latter
looks like we're hitting a design mismatch here....
...and unfortunately I have to abandon this task now and concentrate
on preparation of my talk at LAC.2018 in June
this is a (hopefully just temporary) workaround to deal with static initialisation
ordering problems. The original solution was cleaner from a code readability viewpoint,
however, when lib::Depend was used from static initialisation code, it could
be observed that the factory constructor was invoked after first use.
And while this did not interfere with the instance lifecycle management itself,
because the zero-initialisation of the instance (atomic) pointer did happen
beforehand, it would discard any special factory functions installed from such
a context (and this counts as a bug for my taste).
The original goal for #1129 (ViewSpecDSL_test) is impossible to accomplish,
at least within our existing test framework. Thus I'll limit myself to coding
a clean-room integration test with purely synthetic DSL definitions and mock widgets
Problem is, we cannot even compile the conversion in the "other branch".
Thus we need to find some way to pick the suitable branch at compile time.
Quite similar to the solution found for binding Rec<GenNode> onto a typed Tuple
Attempt to find my way back to the point
where the digression regarding dependency-injection started.
As it turns out, this was a valuable digression, since we can rid ourselves
of lots of ad-hoc functionality, which basically does in a makeshift way
what DependencyFactory now provides as a standard solution
FIRST STEP is to expose the Navigator as generic "LocationQuery" service
through lib::Depend<LocationQuery>
more of a layout improvement, to avoid any code duplication.
The mechanics remain the same
- write an explicit specialisation
- trigger template instantiation within a dedicated translation unit
from now on, we'll have dedicated individual translation units (*.cpp)
for each distinct interface proxy. All of these will include the
interfaceproxy.hpp, which now holds the boilerplate part of the code
and *must not be included* in anything other than interface proxy
translation units. The reason is, we now *define* (with external linkage)
implementations of the facade::Link ctor and dtor for each distinct
type of interface proxy. This allows to decouple the proxy definition code
from the service implementation code (which is crucial for plug-ins
like the GUI)
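The generic shape of this pattern, with invented names (not the actual
Lumiera facade code): the header only declares ctor and dtor, while one
dedicated translation unit per facade provides the definitions with
external linkage, plus the explicit instantiation.

    class MyFacade;                     // stand-in for some layer separation facade

    // interfaceproxy.hpp -- boilerplate, included *only* by proxy translation units
    namespace facade {
      template<class FA>
      struct Link
        {
          Link();                       // deliberately left undefined here
         ~Link();
        };
    }

    // interfaceproxy-myfacade.cpp -- one dedicated translation unit per facade
    namespace facade {
      template<>
      Link<MyFacade>::Link()  { /* open the interface handle */ }
      template<>
      Link<MyFacade>::~Link() { /* close the interface handle */ }

      template struct Link<MyFacade>;   // explicit instantiation within this TU
    }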
The recently rewritten lib::Depend front-end for service dependencies,
together with the configuration as lib::DependInject::ServiceInstance
provides all the necessary features and is even threadsafe.
Beyond that, the expectation is that the instantiation of the
interface proxies can be simplified as well. The proxies themselves however
need to be hand-written as before
I am fully aware this change has some far reaching ramifications.
Effectively I am hereby abandoning the goal of a highly modularised Lumiera,
where every major component is mapped over the Interface-System. This was
always a goal I accepted only reluctantly, and by now my years of experience
confirm my reservations: it will cost us lots of effort just for the
sake of being "sexy".
Actually this is on the implementation side only.
Since Layer-Separation-Interfaces route each call through a binding layer,
we get two Service-"Instances" to manage
- on the client side we have to route into the Lumiera Interface system
- on the implementation side the C-Language calls from the Interface system
need to get to the actual service implementation. The latter is now
managed and exposed via DependInject::ServiceInstance (usage sketched below)
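Roughly like this on the implementation side (the facade and implementation
class names are invented, and the exact DependInject signature is written
from memory, so details may differ):

    #include "lib/depend-inject.hpp"

    class NotificationService           // invented name: the actual implementation
      : public GuiNotification
      { /* ... */ };

    // held as a member of the subsystem object: while this handle lives,
    // lib::Depend<GuiNotification>() routes calls to the service instance
    lib::DependInject<GuiNotification>::ServiceInstance<NotificationService> service_{};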
...still using the FAKE implementation, not a real rules engine.
However, with the new Dependency-Injection framework we need to define
the actual class from the service-provider side, not from some service-client.
This is more orthogonal, but we're forced to install a Lifecycle-Hook now,
in order to get this configuration into the system prior to any use
This is borderline, yet acceptable:
a service might indeed depend on itself circularly
The concrete example is the Advice-System, which needs to push
the clean-up of AdviceProvisions into a static context. From there
the deleters need to call back into the AdviceSystem, since they have
no way to find out whether this is an individual Advice being retracted,
or a mass clean-up due to system shutdown.
Thus the DependencyFactory now invokes the actual deleter
prior to setting the instance-Ptr to NULL.
This sidesteps the whole issue with the ClassLock, which actually
must be already destroyed at that point, according to the C++ standard.
(since it was created on-demand, on first actual usage, *after* the
DependencyFactory was statically initialised). A workaround would be
to have the ctor of DependencyFactory actively pull and allocate the
Monitor for the ClassLock; however, this seems a bit over-engineered
to deal with such a borderline issue
Static initialisation and shutdown can be intricate; but in fact they
work quite precisely and deterministically, once you understand the rules
of the game.
In the actual case at hand the ClassLock was already destroyed, and
it must be destroyed at that point, according to the standard. Simply
because it is created on-demand, *after* the initialisation of the
static DependencyFactory, which uses this lock, and so its destructor
must be called before the dtor of DependencyFactory -- which is precisely
what happens.
So there is no need to establish a special secure "base runtime system",
and this whole idea is misguided. I'll thus close ticket #1133 as wontfix
When some dependency or singleton violates Lumiera's policy regarding destructors and shutdown,
we are unable to detect this violation reliably and produce a Fatal Error message.
This is due to lib::Depend's de-initialisation being itself tied to template generated
static variables, which unfortunately have a visibility scope beyond the translation unit
responsible for construction and clean-up.
- state-of-the-art implementation of access with Double Checked Locking + Atomics (see the sketch after this list)
- improved design for configuration of dependencies. Now at the provider, not the consumer
- support for exposing services with a lifecycle through the lib::Depend<SRV> front-end
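For reference, the generic shape of such an access path with Double Checked
Locking + Atomics (simplified, not the actual lib::Depend code):

    #include <atomic>
    #include <mutex>

    template<class SRV>
    class Depend
      {
        static std::atomic<SRV*> instance;
        static std::mutex        creationMutex;

      public:
        SRV&
        operator()()
          {
            SRV* obj = instance.load (std::memory_order_acquire);
            if (!obj)
              {                                     // slow path: lock and re-check
                std::lock_guard<std::mutex> guard{creationMutex};
                obj = instance.load (std::memory_order_relaxed);
                if (!obj)
                  {
                    obj = new SRV{};
                    instance.store (obj, std::memory_order_release);
                  }
              }
            return *obj;
          }
      };
    template<class SRV> std::atomic<SRV*> Depend<SRV>::instance{nullptr};
    template<class SRV> std::mutex       Depend<SRV>::creationMutex{};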
...which declare DependencyFactory as friend.
Yes, we want to encourage that usage pattern.
Problem is, std::is_constructible<X> gives a misleading result in that case.
We need to do the instantiation check within the scope of DependencyFactory
ideally we want (see the sketch below)
- just a plain unique_ptr
- but with custom deleter delegating to lib::Depend
- Depend can be made friend to support private ctor/dtor
- reset the instance-ptr on deletion
- always kill any instance
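A rough sketch of that combination, building on the generic Depend sketch
above (resetInstance() is a hypothetical hook name; for private dtors the
real design would route the deletion through lib::Depend, which is friend):

    #include <memory>

    template<class SRV>
    struct DependDeleter
      {
        void
        operator() (SRV* obj)
          {
            delete obj;                      // always kill the instance...
            Depend<SRV>::resetInstance();    // ...then reset the instance-ptr
          }                                  //    (hypothetical hook name)
      };

    template<class SRV>
    using InstanceHandle = std::unique_ptr<SRV, DependDeleter<SRV>>;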
all these tests are ported by drop-in replacement
and should work afterwards exactly as before (and they do indeed)
A minor twist was spotted though (nice to have more unit tests indeed!):
Sometimes we want to pass a custom constructor *not* as modern-style lambda,
but rather as direct function reference, function pointer or even member
function pointer. However, we cannot store those types into the closure
for later lazy invocation. This is basically the same twist I ran into
yesterday, when modernising the thread-wrapper. And the solution is
similar. Our traits class _Fun<FUN> has a new typedef Functor
with a suitable functor type to be instantiated and copied. In case of
a lambda this is the (anonymous) lambda class itself, but in case of
a function reference or pointer it is a std::function.
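A simplified reconstruction of that distinction (not the actual traits code):

    #include <functional>
    #include <type_traits>

    template<typename FUN>
    struct _Fun                        // primary case: lambda or arbitrary functor
      {
        using Functor = std::decay_t<FUN>;            // store the closure itself
      };

    template<typename RET, typename...ARGS>
    struct _Fun<RET(&)(ARGS...)>       // a plain function reference...
      {
        using Functor = std::function<RET(ARGS...)>;  // ...must be wrapped for storage
      };

    template<typename RET, typename...ARGS>
    struct _Fun<RET(*)(ARGS...)>       // ...and likewise a function pointer
      {
        using Functor = std::function<RET(ARGS...)>;
      };
    // (member function pointers get an analogous specialisation)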
...which showed up under high system load.
The initialisation of the member variables for the check sum
could be delayed while the corresponding thread was already running
- polish the text in the TiddlyWiki
- integrate some new pages in the published documentation
Still mostly placeholder text with some indications
- fill in the relevant sections in the overview document
- adjust, expand and update the Doxygen comments
TODO: could convert the TiddlyWiki page to Asciidoc and
publish it mostly as-is. Especially the nice benchmarks
from yesterday :-D
This solution is considered correct by the experts.
Regarding the dependency-configuration part, we do not care too much about performance
and use the somewhat slower default memory ordering constraint
...written as byproduct from the reimplementation draft.
NOTE there is a quite similar test from 2013, DependencyFactory_test
For now I prefer to retain both, since the old one should just continue
to work with minor API adjustments (and thus prove this rewrite is a
drop-in replacement).
In the long run those two tests could be merged eventually...
This is a complete makeover of our lib::Depend and lib::DependencyFactory templates.
While retaining the basic idea, the configuration has been completely rewritten
to favour configuration at the point where a service is provided,
rather than at the point where a dependency is used.
Note: we use differently named headers, so the entire Lumiera
code base still uses the old implementation. Next step will be
to switch the tests (which should be drop-in)
explicit friendship seems adequate here
DependInject<SRV> becomes more or less a hidden part of Depend<SRV>,
but I prefer to bundle all those quite technical details in a separate
header, and close to the usage
This is a tricky problem and an immediate consequence of the dynamic configuration
favoured by this design. We avoid a centralised configuration and thus there
are no automatic rules to enforce consistency. It would thus be possible
to start using a dependency in singleton style, but to switch to service
style later, after the fact.
An attempt was made to prevent such a mismatch by static initialisation;
basically the presence of any Depend<SRV>::ServiceInstance<X> would disable
any usage of Depend<SRV> in singleton style. However, such a mechanism
was found to be fragile at best. It seems more appropriate just to fail
when establishing a ServiceInstance on a dependency already actively in
use (and to lock usage after destroying the ServiceInstance).
This issue is considered rather an architectural one, which cannot be
solved by any mechanism at implementation level ever
up to now we used placement into a static buffer.
While this approach is somewhat cool, I can't see much practical benefit anymore,
given that we use an elaborate framework which rules out the use of Meyers Singleton.
And given that with C++11 we're able just to use std::unique_ptr to do all the work.
Moreover, the intended configurability will become much simpler by relying
on a _closure_ to produce a heap-allocated instance for all cases likewise.
The only possible problem I can see is that critical infrastructure might
rely on failsafe creation of some singleton. Up to now this scenario
remains theoretical however
Meyers Singleton is elegant and fast, and considered the default solution.
However...
- we want an "instance" pointer that can be rebound and reset,
and thus we are forced to use an explicit Mutex and an atomic variable.
And the situation is such that the optimiser cannot detect/verify this usage
and thus generates a spurious additional lock for Meyers Singleton
- we want the option to destroy our singletons explicitly
- we need to create an abstracted closure for the ctor invocation
- we need a compiletime-branch to exclude code generation for invoking
the ctor of an abstract baseclass or interface
All those points would be somehow manageable, but would defeat the
simplicity of Meyers Singleton (shown below for reference)
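For comparison, the classic form we're deciding against here:

    template<class SRV>
    SRV&
    meyersInstance()
      {
        static SRV instance;   // created on first call; synchronised by the compiler
        return instance;       // ...but can never be rebound, reset or destroyed early
      }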
Problems:
- using Meyers Singleton plus a ClassLock;
This is wasteful, since the compiler will emit additional synchronisation
and will likely not be able to detect the presence of our explicit locking guard
- what happens if the Meyers Singleton cannot even be instantiated, e.g. for
an abstract baseclass? We are required to install an explicit subclass configuration
in that case, but the compiler is not able to see this will happen when just
compiling lib::Depend
Most dependencies within Lumiera are singletons and this approach remains adequate.
Singletons are not "EVIL" per se. But in some cases, there is an explicit
lifecycle, managed by some subsystem. E.g. some GUI services are only available
while the GTK event loop is running.
This special case can be integrated transparently into our lib::Depend<TY> front-end,
which defaults to creating a singleton otherwise.
we'll use a typedef to represent the default case
and provide the level within the UI-Tree as template parameter for the generic case
This avoids wrapping each definition into a builder function, which will be
the same function for 99% of the cases, and it looks rather compact and natural
for the default case, while still retaining genericity.
Another alternative would have been to inject the Tree-level at the invocation;
but doing so feels more like magic to me.
decided to add a very specific preprocessing here, to make the DSL notation more natural.
My guess is that most people won't spot the presence of this tiny bit of magic,
and it would be way more surprising to have rules like
UICoord::currentWindow().panel("viewer").create()
fail in most cases, simply because there is a wildcard on the perspective
and the panel viewer does not (yet) exist. In such a case, we now turn the
perspective into an "existentially quantified" wildcard, which is treated as if
the actually existing element was written explicitly into the pattern.
...actually just more test coverage,
the feature is already implemented.
What *could* be done though is to inject that UIC_ELIDED marker
on missing perspective specs in create clauses automatically...
This looks like YAGNI, and it would be non-trivial to implement.
But since the feature looks important for slick UI behaviour,
I've created a new ticket and will leave it for now
with the exception of some special situations,
which require additional features from the engine,
especially binding-on-context
Not sure though if I'll implement these or say YAGNI
turns out to be somewhat tricky.
The easy shot would be to use the comma operator,
but I don't like that idea, since in logic programming, comma means "and then".
So I prefer an || operator, similar to short-circuit evaluation of boolean OR
Unfortunately, OR binds stronger than assignment, so we need to trick our way
into a smooth DSL syntax by wrapping into intermediary marker types, and accept
rvalue references only, as additional safeguard to enforce the intended inline
definition syntax typical for DSL usage.
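The generic shape of that trick, with invented names (the actual DSL
types differ, this just shows the marker-type and rvalue-only mechanics):

    #include <utility>
    #include <vector>

    struct Clause                  // invented: one alternative rule for view allocation
      { /* ... */ };

    struct Alternatives            // intermediary marker type collecting the chain
      {
        std::vector<Clause> clauses;
      };

    // rvalue references only: enforces the inline one-shot definition style
    inline Alternatives
    operator|| (Clause&& c1, Clause&& c2)
      {
        Alternatives alt;
        alt.clauses.emplace_back (std::move (c1));
        alt.clauses.emplace_back (std::move (c2));
        return alt;
      }

    inline Alternatives
    operator|| (Alternatives&& chain, Clause&& next)
      {
        chain.clauses.emplace_back (std::move (next));
        return std::move (chain);
      }

    // usage:  allocSpec = withinPerspective() || currentWindow() || firstWindow();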
seems to be the most orthogonal way to strip adornments from the SIG type
Moreover, we want to move the functor into the closure, where it will be stored anyway.
From there on, we can pass it as const& into the binder (for creating the partially closed functor)
...as it turned out, the result type was the problem: the lambda we provide
typically does not yield an Allocator, but only its baseclass function<UICoord(UICoord)>
solution: make Allocator a typedef, we don't expect any further functionality
...but not yet able to get it to compile.
Problem seems to be the generic lambda, which is itself a template.
Thus we need a way to instantiate that template with the correct arguments
prior to binding it into a std::function
been there, seen that recently (-> TreeExplorer, the Expander had a similar problem)
...this was quite an extensive digression, which basically gave us
a solid foundation for topological addressing and pattern matching
within the "interface space"
rationale: sometimes (likely this is even the standard case) we do not just
want to "extend"; rather, we want to extend at very specific levels.
This is easy to implement, based on the existing building blocks for path manipulation
the original construction works only as long as we stick to the "classical" Builder syntax,
i.e. use chained calls of the builder functions. But as soon as we just invoke
some builder function for sake of the side-effect on the data within the builder,
this data is destroyed and moved out into the value return type, which unfortunately
is being thrown away right afterwards.
Thus: either make a builder really side-effect-free, i.e. do each mutation
on a new copy (which is kind of inefficient and defeats the whole idea)
or just accept the side-effect and return only a reference.
In this case, we can still return an rvalue reference, since at the end
we want to move the product of the build process out into the destination.
This works only because C++ keeps a temporary alive until the end of the
full expression, which ensures the original object stays alive during the
whole evaluation of such a chained builder expression.
NOTE: the TreeMutator (in namespace lib::diff) also uses a similar Builder construction,
but in *that* case we really build a new product in each step and thus *must*
return a value object, otherwise the reference would already be dangling the
moment we leave the builder function.
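To illustrate the accepted variant, a self-contained mock (not the actual
builder; the string path just stands in for the real product):

    #include <string>
    #include <utility>

    class PathBuilder
      {
        std::string path_;

      public:
        PathBuilder&&
        append (std::string const& elm) &&    // invocable on rvalues only
          {
            path_ += "/" + elm;               // side-effect on the builder's own data
            return std::move (*this);
          }

        std::string
        build() &&
          {
            return std::move (path_);         // move the finished product out
          }
      };

    // fine: the temporary builder lives until the end of the full expression
    auto path = PathBuilder{}.append("panel").append("viewer").build();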
- the default should be to look for total coverage
- the predicates should reflect the actual state of the path only
- the 'canXXX' predicates test for possible covering mutation
I set out to "discover" what operations we actually need on the LocationQuery
interface, in order to build a "coordinate resolver" on top. It seems like
this set of operations is clear by now.
It comes somewhat as a surprise that this API is so small. This became possible
through the idea of a ''child iterator'' with the additional ability to delve down and
expand one level of children of the current element. Such can be ''implemented''
by relying on techniques similar to the "Monads" from functional programming.
Let's see if this was a good choice. The price to pay is a high level of ''formal precision''
when dealing with the abstraction barrier. We need to stick strictly to the notion of a
''logical path'' into a tree-like topology, and we need to be strong enough never to
give in and indulge in "the concrete, tangible". The concrete reality of a tree
processing algorithm with memory management plus backtracking is just too complex
to be handled mentally. So either stick to the rules or get lost.
yet some more trickery to get around this design problem.
I just do not want to rework IterSource right now, since this will be
a major change and require more careful consideration.
Thus introduce a workaround and mark it as future work
Using this implementation, "child expansion" should now be possible.
But we do not cover this directly in a unit test yet
...but not yet switched into the main LocationQuery interface,
because that would also break the existing implementation;
recasting this implementation is the next step to do....
...which basically allows us to return any suitable implementation
for the child iterator, even to switch the concrete iteration on each level.
We need this flexibility when implementing navigation through a concrete UI
This is just a temporary solution, until IterSource is properly refactored (#1125)
After that, IterSource is /basically a state core/ and the adaptor will be more or less trivial
- as it stands currently, IterSource has a design problem, (see #1125)
- and due to common problems in C++ with mix-ins and extended super interfaces,
it is surprisingly tricky to build on an extension of IterSource
- thus the idea is to draft a new solution "in green field"
by allowing TreeExplorer to adapt IterSource automatically
- the new solution should be templated on the concrete sub interface
and ideally even resolve the mix-in-problem by re-linearising the
inheritance line, i.e. replace WrappedLumieraIter by something
able to wrap its source, in a similar vein as TreeExplorer does
...yet I do not want to move all of the traits over into the
publicly visible lib::iter_explorer namespace -- I'm quite happy
with these traits being clearly marked as local internal details
NOTE it just type checks right now,
but since meta programming is functional programming, this means
with >90% probability that it might actually work this way....
...which also happens to include sibling and child iteration;
this is an attempt to reconcile the inner contradictions of the design
(we need both absolute flexibility for the type of each child level iterator
yet we want just a single, generic iterator front-end)
...this was a difficult piece of consideration and analysis.
In the end I've settled down on a compromise solution,
with the potential to be extended into the right direction eventually...
surprise: the standard for-loop causes a copy of the iterator.
From a logical POV this is correct: since the iterator is named,
it cannot just be moved into the loop construct and be consumed.
Thus: write a plain old-fashioned for loop and consume the damn thing.
So the top-level call into util::join(&&) decides, if we copy or consume
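A sketch of the resulting shape (simplified, not the actual util::join;
assuming a Lumiera-style iterator with bool conversion, ++ and *, yielding
string-like elements):

    #include <string>
    #include <type_traits>
    #include <utility>

    template<class IT>
    std::string
    join (IT&& src, std::string const& delim =", ")
      {
        std::decay_t<IT> iter{std::forward<IT> (src)};  // copy an lvalue, consume an rvalue
        std::string result;
        for ( ; iter; ++iter )                          // plain loop: drain the local copy
          {
            if (!result.empty()) result += delim;
            result += *iter;                            // assuming string-like elements
          }
        return result;
      }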
several extensions and convenience features are conceivable,
but I'll postpone all of them for later, when actual need arises
Note especially there is one recurring design challenge, when creating
such a demand-driven tree evaluation: more often than not it turns out
that "downstream" will need some information about the nested tree structure,
even while, on the surface, it looks as if the evaluation could be working
completely "linearised". Often, such a need arises from diagnostic features,
and sometimes we want to invoke another API, which in turn could benefit
from knowing something about the original tree structure, even if just
abstracted.
I have no real solution for this problem, but implementing this pipeline builder
leads to a pragmatic workaround: since the iterator already exposes an expandChildren(),
it may as well expose a depth() call, even while keeping anything beyond that
opaque. This is not the clean solution you'd like, but it comes without any
overhead and does not really break the abstraction.
...so sad.
The existing implementation was way more elegant,
just it discarded an exhausted parent element right while in expansion,
so effectively the child sequence took its place. Resolved that by
decomposing the iterNext() operation. And to keep it still readable,
I made the invariant of this class explicit and now check it (which
caught yet another undiscovered bug. Yay!)
instead of building a very specific collaboration,
rather just pass the tree depth information over the extended iterator API.
This way, "downstream" clients *can* possibly react on nested scope exploration
We get conflicting goals here:
- either the child expansion happens within the opaque source data
and is thus abstracted away
- or the actual algorithm evaluation becomes aware of the tree structure
and is thus able to work with nested evaluation contexts and a local stack
...build on top of the core features of TreeExplorer
- completely encapsulate and abstract the source data structure
- build a backtracking evaluation based on layered evaluation
of this abstracted expandable data source
NOTE: test passes compilation, but doesn't work yet
...and there is a point where to stop with the mere technicalities,
and return to a design in accordance with the inner nature of things.
Monads are a mere technology, without explanatory power as a concept or pattern
For that reason
- discard the second expansion pattern implemented yesterday,
since it just raises the complexity level for no given reason
- write a summary of my findings while investigating the abilities
of Monads during this design exercise.
- the goal remains to abandon IterExplorer and use the now complete
IterTreeExplorer in its place. Which also defines roughly the extent
to which monadic techniques can be useful for real world applications
...it can sensibly only be done within the Expander itself.
Question: is this nice-to-have feature worth the additional complexity
of essentially loading two quite distinct code paths into a single
implementation object?
As it stands, this looks totally confusing to me...
At that time, our home-made Tuple type was replaced by std::tuple,
and then the command framework was extended to also allow command invocation
with arguments packaged as lib::diff::Record<GenNode>
With changeset 0e10ef09ec,
a rebinding from std::tuple<ARGS...> to Types<ARGS...> was introduced,
but unfortunately this was patched-in on top of the existing Types<ARGS...>
just as a partial specialisation.
Doing it this way is especially silly, since now this rebinding also kicks
in when std::tuple appears as a regular payload type within Types<...>
This is what happened here: We have a Lambda taking a std::tuple<int, int>
as argument, yet when extracting the argument type, this rebinding kicks in
and transforms this argument into Types<int, int>
Oh well.
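A simplified reconstruction of the trap (names reduced to the essentials):

    #include <tuple>

    template<typename...TYPES>
    struct Types { };

    template<typename...TYPES>               // the patched-in partial specialisation:
    struct Types<std::tuple<TYPES...>>       // rewrites Types<tuple<A,B>> into Types<A,B>
      {
        using Rebound = Types<TYPES...>;
      };

    // intended:    rebind a command's argument tuple into a type sequence
    // accidental:  a lambda  [](std::tuple<int,int>) {...}  gets its single
    //              tuple argument silently exploded into Types<int,int>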
this leads to either unfolding the full tree depth-first,
or, when expanding eagerly, to delve into each sub-branch down to the leaf nodes
Both patterns should be simple to implement on top of what we've built already...
IterSource should be refactored to have an iteration control API similar to IterStateWrapper.
This would resolve the need to pass that pos-pointer over the abstraction barrier,
which is the root cause for all the problems and complexities incurred here
...but for now the price is that we need to punch a hole into IterAdapter.
And obviously, this is all way too tangled and complex on the implementation level.
turns out that -- again -- we miss some kind of refresh after expanding children.
But this case is more tricky; it indicates a design mismatch in IterSource:
we (ab)use the pos-pointer to communicate iteration state. While this might be
a clever trick for iterating a real container, it is more than dangerous when
applied to an opaque source state as in this case. After expanding children,
the pos-pointer still points into the cache buffer of the last transformer.
In fact, we miss an actualisation call, but the IterSource interface does not
support such a call (since it tries to get away with state hidden in the pos pointer)
as it turned out, when "inheriting" ctors, C++14 removes the base classes' copy ctors.
C++17 will rectify that. Thus for now we need to define explicitly that
we'll accept the base for initialising the derived. But we need to do so
in only one location, namely at the lowest point in the chain.
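In a nutshell (simplified to a standalone example):

    struct Base
      {
        Base (int) { }
      };

    struct Derived
      : Base
      {
        using Base::Base;            // pulls in Base(int), but *not* a ctor
                                     // accepting a Base object itself
        Derived (Base const& b)      // ...so this acceptance must be spelled
          : Base{b}                  //    out explicitly, once, at the bottom
          { }                        //    of the inheritance chain
      };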
Since this now requires importing iter-adapter-stl.hpp and iter-source.hpp
at the same time, I decided to drop the convenience imports of the STL adapters
into namespace lib. There is no reason to prefer the IterSource-based adapters
over the iter-adapter-stl.hpp variants of the same functionality.
Thus better always import them explicitly at usage site.
...actual implementation of the planned IterSource packaging is only stubbed.
But I needed to redeclare a lot of ctors, which doesn't seem logical.
And I get a bad function invocation from another test case which worked correctly beforehand.