actually I do not know much yet about the actual situation where,
within the Builder run, we're able to detect a change and generate
a diff description. However, as a first step, I'll pick IterSource
as a base interface and use a "generation context", which is to be
passed by shared-ptr
the (trivial) implementation turned out to be correct as written,
but it was (again) damn challenging to get the multithreaded chaotic
test fixture and especially the lambda captures to work correctly.
- concept for a first preliminary implementation of dispatch into the UI thread
- define an integration effort to build a complete working communication chain
This change was caused by investigation of UI event loop dispatch;
since the GTK UI is designed to run single threaded, any invocation
from other threads needs to be dispatched explicitly.
A possible way to achieve this is to use Glib::Dispatcher, which
in turn requires that the current thread (which is in this case the UI thread)
already holds a Glib::MainContext
This prompted me to create a tight link between the external facade interfaces
of the UI and the event loop itself. What remains to be settled is how
to hand over arguments to the action in the main loop
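To make this concrete, a minimal sketch of such cross-thread dispatch via
Glib::Dispatcher (assuming a recent glibmm with lambda-compatible libsigc++;
the names used here are invented for illustration). Note the documented
requirement that a Dispatcher must be constructed, connected and destroyed
in the receiving thread, whose Glib::MainContext it binds to:

    #include <glibmm.h>

    Glib::Dispatcher* uiSignal;       // to be created within the UI thread

    void
    setupWithinUiThread()             // UI thread already runs a Glib::MainContext
    {
        uiSignal = new Glib::Dispatcher{};    // binds to this thread's MainContext
        uiSignal->connect ([]{ /* handler runs later, within the UI event loop */ });
    }

    void
    invokedFromSomeOtherThread()
    {
        uiSignal->emit();    // the only operation safe to call cross-thread
    }

Since Dispatcher::emit() carries no payload, the argument hand-over mentioned
above has to be solved separately, e.g. by a mutex-protected queue which the
connected handler drains from within the UI thread.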
After investigation of current GTK and GIO code, I came to the conclusion
that we do *not* want to rely on the shiny new Gtk::Application, which
provides a lot of additional "convenience" functionality we neither
need nor want. Most notably, we do not want extended desktop integration
like automatically connecting to D-Bus or exposing application actions
as desktop events.
After stripping away all those optional functions and extensions, it turns
out the basic code to operate the GTK main event loop is quite simple.
This changeset extracts this code from the (deprecated) Gtk::Main and
integrates it directly in Lumiera's UI framework object (UiManager).
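Distilled to its essence, this amounts to something like the following sketch
(a simplification; the actual UiManager does more). The C-level calls
gtk_init / gtk_main / gtk_main_quit are the core that the deprecated
Gtk::Main used to wrap:

    #include <gtkmm.h>

    class UiManager
    {
    public:
        UiManager (int& argc, char**& argv)
        {
            Glib::init();
            gtk_init (&argc, &argv);      // basic GTK setup, no Gtk::Application
        }

        void run()       { gtk_main(); }         // enter the GTK main event loop
        void terminate() { gtk_main_quit(); }    // cause run() to return
    };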
this is just a tiny change to make things more orthogonal.
Now the unwinding and calls to any GTK / Widget dtors happen *after*
emitting the term signal from UI shutdown. Which means the other subsystems
are shutting down (in their dedicated threads) as well, thus lowering
the probability of some action still using the UI and triggering an exception
as it turned out, the former functionality was deactivated in 2009
with changeset 6151415
The whole concept seems to be unfinished, and needs to be reworked
and integrated with "Views and Perspectives" (whatever that is...)
See also #1097
Gtk::Main is deprecated, but the new solution, instantiating a
Gtk::Application object does not match our use case, since we handle
all application concerns already and just need a Gtk main loop to run.
Anyway, it became clear that the "main object" will be the new UiManager.
As a first step, I've now moved the (deprecated) Gtk::Main object
down there. Next step (planned) will be to inherit from Gio::Application
and clone some functionality from Gtk::Application
...which opens more questions than it solves at the moment.
Especially note #1096, the question how to refer to object-IDs
Maybe we need to enable sending EntryIDs via GenNode?
Anyway, the magic spell is broken now: we have a way to
establish commands and to issue them from the UI, with full integration
of UI-Bus, layer separation facade, instance management and ProcDispatcher
Looks like a stepping stone
after extended analysis, it turned out to be a "placeholder concept",
introducing an indirection which can be removed altogether
- simple command invocation happens at gui::model::Tangible
- it is based on the command (definition) ID
- instance management happens automatically and transparently
- the extended case of context-bound commands will be treated later,
and is entirely self-contained
while the initial design treated the commands in a strictly top-down manner,
where the ID is known solely to the CommandRegistry, this change (and the
information duplication it entails) became necessary, since by default we now
always enqueue and dispatch anonymous clone copies of the original command
definition (prototype). The implementation uses the trick of tagging the
command-ID when a command-handle is activated, which is also the moment
when it is tracked in the registry.
in accordance with the design changes concluded yesterday.
- in the standard cases we now check the global registry first
- automatically create anonymous clone copy from global commands
- reorganise code internally to use common tail implementation
as it turns out, we can always trigger commands right away,
the moment all arguments are known. Thus it is sufficient to
send a single argument binding message, which allows us to
get rid of a lot of ugly complexities (payload visitor).
It seems more adequate to push the somewhat intricate mechanics
for the "fall back" onto generic commands down into the implementation
level of CommandInstanceManager. The point is, we know the standard
usage situation is to rely on the instance manager, and thus we want
to avoid redundant table lookups, only to support the rare case of
fallback to global commands. The latter is currently used only from
unit-tests, but might in future also be used by scripts.
Due to thread safety considerations, I have refrained from handing
out a direct reference to the command token sitting in the registry,
even though this incurs a small runtime penalty (touching
the shared ref-count when creating a copy of the smart-handle).
This is the typical situation where you'd be tempted to sacrifice
sanity for the sake of an imaginary performance benefit, which
in fact is dwarfed by all the machinery of UI-Bus and argument
passing via GenNode.
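As a sketch of the access path chosen instead (all names invented here,
and Command reduced to a ref-counted stand-in):

    #include <memory>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    using Command = std::shared_ptr<void>;    // stand-in: Command is a smart-handle

    class CommandRegistry
    {
        std::mutex mutex_;
        std::unordered_map<std::string, Command> table_;

    public:
        Command                                 // by-value: a copy of the smart-handle,
        getInstance (std::string const& id)     // never a reference into the registry
        {
            std::lock_guard<std::mutex> guard{mutex_};
            return table_.at (id);              // the copy bumps the shared ref-count
        }
    };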
but I am not happy with the implementation yet: the maybeGet just
doesn't feel right. Likely it will be a better idea to push that
fallback mechanism generally down into the CommandInstanceManager?
just by reasoning from the concept, an instance should always correspond
to a single invocation trail. If several sets of invocation state
compete with each other, they must be kept distinct; otherwise the
implicit state is going to be corrupted
This changeset fixes a huge pile of problems, as indicated in the
error log of the Doxygen run after merging all the recent Doxygen improvements
unfortunately, auto-linking still does not work in various places.
There is no clear indication what might be the problem.
Possibly the rather unstable SQLite support in this Doxygen version
is the cause. Anyway, needs to be investigated further.
this is indeed a change of concept.
A 'command instance' cannot be found through the official
Command front-end anymore, since we do not create a registration.
This allows us to avoid decorating command IDs with running counters
interesting new twist: we do not even need to decorate with a running number,
since we'll get away with an anonymous command instance, thanks to Command
being a smart-handle
this is a prerequisite for command instance management:
We have now an (almost) complete framework for writing actual
command definitions in practice, which will be registered automatically.
This could be complemented (future work) by a script in the build process
to regenerate proc/cmd.hpp based on the IDs of those automatic definitions.
The point in question is how to manage these definitions in practice,
since we're about to create a huge lot of them eventually. The solution
attempted here is heavily inspired by the boost-test framework
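The underlying trick, sketched with invented names (registry layout and
helper are illustrative only): a static helper object per definition performs
the registration during static initialisation, much like boost-test
registers its test cases:

    #include <functional>
    #include <map>
    #include <string>
    #include <utility>

    using CommandSetup = std::function<void()>;   // stand-in for the definition closure

    inline std::map<std::string, CommandSetup>&
    commandRegistry()                  // Meyers singleton: safe static init order
    {
        static std::map<std::string, CommandSetup> registry;
        return registry;
    }

    struct CommandRegistration
    {
        CommandRegistration (std::string id, CommandSetup def)
        {
            commandRegistry().emplace (std::move (id), std::move (def));
        }
    };

    // one static instance per command definition triggers the registration
    static CommandRegistration register_cmd_example ("cmd_example",
                                                     []{ /* build command definition */ });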
...because this topic serves as a vehicle to elaborate various core concepts
of the UI backbone, especially how to access, bind and invoke Proc-Layer commands
...turns out to be a nasty subject, now that we're able to see
in more concrete detail how this interaction needs to be carried out.
Basically this is a blocker for the top-level, since it is obviously
some service in the top-level which ultimately becomes responsible for
orchestrating this activity
this pretty much resolves most of the uncertainties:
we now get a set of mutually dependent services, each of which
is aware of the other members' capabilities, but accesses those
only through that partner's API
After quite some pondering, it occurred to me that we
- need some top-level model::Tangible to correspond to the RootMO in the session
- need some Controller to handle globally relevant actions
- need a way to link action invocation to transient interaction state (like focus)
This leads to the introduction of a new top-level controller, which is better
suited to fill that role than the deprecated model-controller or the demoted window-manager
looks like we're in management business here ;-)
we chop off heads, slaughter the holy cows and then install -- a new manager
...allows us to get rid of a lot of sigc boilerplate syntax.
The downside is that the resulting functors are not sigc::trackable.
This seems adequate here, since the whole top-level UI backbone is
maintained by GtkLumiera, and thus ensured to exist as long as the
main GTK event loop is running.
WARNING: beware of creating "wild" background threads in the UI, without
proper scheduling of any communication via the event loop!
This is a very pervasive change and basically turns the whole top-level
of the GTK-UI bottom-up. If this change turns out right, it would likely
solve #1048
WARNING: in parts not implemented, breaks UI
...which itself is obsolete and needs to be redesigned from scratch.
For now we create a local instance of this obsolete PlaybackController
in each viewer panel, and we use a static accessor function to reach just
some instance. This would break if we started playback with multiple viewer
panels. But we can't anyway, since the Player itself is also a broken
leftover from an obsolete design study from the early days.
so why care...
- WindowList (ex WindowManager)
- Project & Controller
the latter ones are defunct and can be replicated down into each
of the old timeline panel instances. They just serve the purpose
to keep this old code barely functional, so it can be used as reference
for building the new timeline
There seems to be a mismatch in the arrangement of the top-level entities
* we support multiple windows, yet from reading the code, you'd get the impression we aren't really aware we have multiple top-level windows
* the `WindowManager` is the core UI manager, which feels like a mix-up in concerns
* the `WorkspaceWindow::createUI()` does the global UI initialisation. Again, we have multiple workspace windows.
* `GtkLumiera::main()` creates a `Model` and a `Controller` in local function scope, but stores the `WindowManager` in an object field.
* it seems, for that very reason, `GtkLumiera` needed to be a singleton, to allow by-name access to "the" `WindowManager`
* needless to say, this causes a host of problems when shutting down the UI.
The idea is to introduce a dedicated UiManager, to deal with the central
framework induced concerns solely, and to demote the WindowManager and the
WorkspaceWindows to care only for their local concerns
in fact it just does not fulfil any of the behavioural properties
of a full-fledged UI-Element. All it needs is an uplink bus connection,
so let's just keep it at that
Sidenote: I've realised today that such a "free standing" BusTerm
without registration in Nexus is a good idea and acceptable solution.
yes, it's a cycle and indeed quite tricky.
Just verified it (again) with the debugger and saw all
dtor calls happening in the expected order. Also the number
of Nexus registrations is sane
Now I've realised that there are two degrees of connectedness.
It is very much possible to have a "free standing" BusTerm, which
only allows to send uplink messages. In fact, this is how CoreService
is implemented, and probably it should also be the way to connect
the GuiNotification service...
Reason was some insidious detail regarding Lambdas:
When a Lambda captures context, a *closure* is created.
And while the Lambda itself is generated code, pretty much
like an anonymous function, the closure depends on the context
that was captured. In our case here, the Lambda used to start
the thread was the problem: it captured the termCallback functor
from the argument of the enclosing function. In fact it did not
help or change anything if we successively package that lambda
into a function object and store this by value, because the
lambda still refers to the transient function context present
on stack at the moment it was captured.
The solution is to revert back to a bind expression, since this
creates a dedicated storage for the bound function arguments
managed within the bind-functor. This makes us independent
from the call context
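A condensed reconstruction of the trap (simplified signatures, invented names):

    #include <functional>
    #include <thread>
    #include <utility>

    using Callback = std::function<void()>;

    void
    launch_broken (Callback const& termCallback)
    {
        // the closure captures termCallback *by reference*, i.e. it refers into
        // this function's stack frame; copying the lambda into a std::function
        // copies the closure, but NOT the referred-to callback
        std::function<void()> threadStart = [&]{ termCallback(); };
        std::thread{std::move (threadStart)}.detach();    // dangles once we return!
    }

    void
    launch_fixed (Callback const& termCallback)
    {
        // a bind expression stores a *copy* of the bound argument within the
        // generated functor, independent of the enclosing call context
        std::thread{std::bind ([](Callback& cb){ cb(); }, termCallback)}.detach();
    }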
...because some Bus connections stem from elements which are
members of CoreService, thus they'll still be connected when the
sanity check in the dtor runs
But even with this fix, we still get a SEGFAULT
TODO
- is this actually a sensible idea, from a design viewpoint?
- in which way to bind GuiNotification for receiving diff messages?
- Problem with disconnecting from Nexus on shutdown
Writing and debugging such tests is always an interesting challenge...
Fortunately this exercise didn't unveil any problem in the newly written
code, only some insidious problems in the test fixture itself. Which
again highlights the necessity that each *command instance* needs
to be an independent clone from the original *command prototype*,
since argument binding messages and trigger messages can appear
in arbitrary order.
not quite sure how to get the design straight.
Also a bit concerned because we'll get so many indirections;
the approach to send invocations via the UI-Bus needs to prove its viability
Did a full review of state and locking logic, seems airtight now.
- command processing itself is unimplemented, we log a TODO message for now
- likewise, builder is not implemented
- need to add the deadlock safeguard #1054
We found out that it's best to run it single threaded
within the session loop thread. This does not mean the Builder
itself is necessarily single threaded, but the Builder's top level
will block any other session operation, and this is a good thing.
For this reason it makes more sense to have the Builder integrated
as a component into the session subsystem.
after reading some related code, I am leaning towards a design
to mirror the way command messages are sent over the UI-Bus.
Unfortunately this pretty much abandons the possibility to
invoke these operations from a client written in C or any
other hand made language binding. Which pretty much confirms
my initial reservation towards such an excessively open
and generic interface system.
...this means to turn Looper into a state machine.
Yet it seems more feasible, since the DispatcherLoop has a nice
checkpoint after each iteration through the while loop, and we'd
keep that whole builder-dirty business completely confined within
the Looper (with a little help of the DispatcherLoop)
Let's see if the state transition logic can actually be implemented
based just on such a checkpoint....?
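A rough sketch of how such a checkpoint-based state machine might look
(flags and member functions are pure guesswork here):

    // state transitions are evaluated solely at the checkpoint
    // reached after each iteration of the dispatcher loop
    class Looper
    {
        bool shutdown_     = false;
        bool builderDirty_ = false;

    public:
        void markDirty()       { builderDirty_ = true; }
        void triggerShutdown() { shutdown_ = true; }

        bool shallLoop()  const { return not shutdown_; }

        bool
        isBuilderRunDue()       // the checkpoint decision
        {
            if (builderDirty_ and not shutdown_)
              {
                builderDirty_ = false;
                return true;    // caller now triggers the builder run
              }
            return false;
        }
    };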
....if by some weird coincidence, a command dispatched into the session
happens to trigger session shutdown or re-loading, this will cause a deadlock,
since decommissioning of session data structures must wait for the
ProcDispatcher to disable command processing -- and this will obviously
never happen when in a callstack below some command execution!
After some consideration, it became clear that this service implementation
is closely tied to the DispatcherLoop -- which will consequently be
responsible to run and expose this service implementation
need to keep state variables on both levels,
since the session manager (lifecycle) "opens" the session
for external access by starting the dispatcher; it may well happen
thus that the session starts up, while the *session subsystem*
is not (yet) started
mark TODOs in code to make that happen.
Actually, it is not hard to do so, it just requires combining
all the existing building blocks. When this is done, we can define
the "Session" subsystem as prerequisite for "GUI" in main.cpp
Unless I've made some (copy-n-paste) mistake with defining the facades,
this should be sufficient to pull up "the Session" and automatically
let the Gui-Plugin connect against the SessionCommandService
up to now this happened from the GuiRunner, which was a rather bad idea
- it can throw and thus interfere with the startup process
- the GuiNotification cannot sensibly be *implemented* just backed
by the GuiRunner, whereas CoreService offers access to the necessary
implementation facilities to do so
so the true reason is an inner contradiction in the design
- I want it to be completely self similar
- but the connection to CoreService does not conform
- and I do not want to hard code CoreService into the Nexus classdefinition
So we treat CoreService as uplink for Nexus and Nexus as uplink for CoreService,
with the obvious consequences that we're f**ed at init and shutdown.
And since I want to retain the overall design, I resort to implement
a short circuit detector, which suppresses circular deregistration calls
Decision was made to use the CoreService as PImpl to organise
all those technical aspects of running the backbone. Thus,
the Nexus (UI-Bus hub) becomes part of CoreService
...problem is, I actually don't know much about what kinds of markers
we'll get, and how we handle them. Thus introducing a marker kind
is just a wild guess, in order to get *any* tangible attribute
this is a tricky problem and a tough decision.
After quite some pondering, I chose to enforce mandatory fields
through the ctor, and not to allow myself cheating my way around it
it occurred to me that effectively we abandoned the use of
a business facade and proxy model in the UI. The connection
becomes entirely message based now.
To put that into context, the originally intended architecture
never came to life. The UI development stalled before this could
happen; possibly it was also hampered by the "impedance mismatch"
between our intentions in the core and such a classical, model centric
architecture. Joel several times complained that he felt blocked; but
I did not really understand this issue. Only recently, when I came to
adapting the timeline display to GTK-3, I realised the model centric
approach can not possibly work with such an open model as intended
in our case. It would lead to endless cascades of introspection.
these are just empty class files, but writing a basic description
for each made me flesh out a lot of organisational aspects of what
I am about to build now
...at first it seemed we might run into a very fundamental problem;
but after some consideration it turns out the interspersed display manager
and the decoupling between model/presenter and widget happens to mitigate
this problem as well.
the content of the "GuiTimelineWidgetStructure" tiddler is
actually about architecture questions concerning custom widgets
in general, plus working notes regarding an investigation of the
Gtk::Layout widget.
bottom line (see the sketch below)
- seems we need to do that manually
- must wait until we're inside the on_draw() callback
- use Container::foreach() to visit all child widgets
- Layout::set_size()
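A hedged sketch of that approach (assuming gtkmm-3 with a lambda-compatible
libsigc++; the widget class is made up):

    #include <gtkmm.h>
    #include <algorithm>

    class CanvasWidget : public Gtk::Layout
    {
        bool
        on_draw (Cairo::RefPtr<Cairo::Context> const& cr)  override
        {
            // child allocations are reliable only here, while drawing
            int maxX = 0, maxY = 0;
            this->foreach ([&](Gtk::Widget& child)
                            {
                                auto all = child.get_allocation();
                                maxX = std::max (maxX, all.get_x() + all.get_width());
                                maxY = std::max (maxY, all.get_y() + all.get_height());
                            });
            this->set_size (maxX, maxY);     // grow the virtual canvas to fit
            return Gtk::Layout::on_draw (cr);
        }
    };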
- define tasks to be addressed during investigation
- read documentation, identify problematic aspects
- prepare a child widget class to be placed on the canvas
actually this is a pragmatic extension for some special use cases,
and in general rather discouraged, since it contradicts the
established diff semantics. Yet with some precaution, it should
be possible to transport information via an intermediary ETD
Map -> ETD -> Map
...this is the first attempt to integrate the Diff-Framework into (mock) UI code.
Right now there is a conceptual problem with the representation of attributes;
I tend to reject the idea of binding to an "attribute map"
the generic typing to DiffMutable does not make much sense:
the desired implementation within gui::ctrl::Nexus
is bound to work on Tangibles only, because that is what
the UI-Bus stores in the routing table
at first, this seemed like a good idea, but it has already caused
numerous quirks and headaches all over the place. And now, with
the intent to switch to the TreeMutator based implementation,
it would be damn hard to retain these features, if at all
possible.
Thus let's ditch those in time and forget about it!
this is a subtle change in the semantics of the diff language,
actually IMHO a change towards the better. It was prompted by the
desire to integrate diff application onto GenNode-trees into the
implementation framework based on TreeMutator, and do away with
the dedicated implementation.
Now it is a matter of the *selector* to decide if a given layer
is responsible for "attributes". If so, then *all* elements within
this layer count as "attribute" and an after(Ref::ATTRIBS) verb
will fast forward behind *the end of this layer*
Note that the meta token Ref::ATTRIBS is a named GenNode,
and thus trivially responds to isNamed() == true
...instead of using a hand written implementation,
the idea is to rely on the now implemented building blocks,
with just some custom closures to make it work.
- esp. verify the proper inclusion of the Selector closure in all Operations
- straighten the implementation of Attribute binding
- clean-up the error checking helpers
In theory, acceptSrc and skipSrc are to operate symmetrically,
with the sole difference that skipSrc does not move anything
into the new content.
BUT, since skipSrc is also used to implement the `skip` verb,
which serves to discard garbage left behind by a preceding `find`,
we cannot touch the data found in the src position without risk
of SEGFAULT. For this reason, there is a dedicated matchSrc operation,
which shall be used to generate the verification step to properly
implement the `del` verb.
I've spent quite some time to verify the logic of predicate evaluation.
It seems to be OK: whenever the SELECTOR applies, then we'll perform
the local match, and then also we'll perform the skipSrc. Otherwise,
we'll delegate both operations likewise to the next lower layer,
without touching anything here.
This is the first skeleton to combine all the building blocks,
and it passes compilation, while of course most of the binding
implementation still needs to be filled in...
- default recommendation is to implement DiffMutable interface
- ability to pick up similar non-virtual method on target
- for anything else client shall provide free function mutatorBinding(subject)
PERSONAL NOTE: this is the first commit after an extended leave,
where I was in hospital to get an abdominal cancer removed.
Right now it looks like surgery was successful.
this is at the core of the integration problem: how do we expose
the ability of some opaque data structure to create a TreeMutator?
The idea is
- to use a marker/capability interface
- to use template specialisation to fabricate an instance of that interface
based on the given access point to the opaque data structure
but unfortunately this runs straight into a tough problem,
which I tried to avoid and circumvent all the time:
At some point, we're bound to reveal the concrete type
of the Mutator -- at least to such an extent that we're
able to determine the size of an allocator buffer.
Moreover, by the design chosen thus far, the active
TreeMutator instance (subclass) is assumed to live within
the top-level of a Stack, which means that we need to
place-construct it into that location. Thus, either
we know the type, or we need to move it into place.
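To illustrate the dilemma with a minimal sketch (names invented): place-
constructing into a fixed buffer requires the size, and determining the size
requires revealing the concrete type:

    #include <cstddef>
    #include <new>
    #include <utility>

    template<std::size_t siz>
    class MutatorBuffer
    {
        alignas(std::max_align_t) char buf_[siz];

    public:
        template<class MUT, typename...ARGS>
        MUT&
        emplace (ARGS&& ...args)     // place-construct the concrete mutator subclass
        {
            static_assert (sizeof(MUT) <= siz, "concrete TreeMutator exceeds buffer");
            return * new(buf_) MUT{std::forward<ARGS> (args)...};
        }
    };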
the plan is to put together an integration test
of diff application to opaque data through the TreeMutator,
using the now roughly finished binding primitives.
moreover, the idea is to apply precisely the same diff sequence,
as was used in the detail test (TreeMutatorBinding_test).
NOTE: right now, the existing placeholder code applies this sequence
onto a Rec<GenNode>. This should work already -- and it does,
BUT the result of the third step is wrong. Really have to
investigate this accidental finding, because this highlights
a conceptual mismatch in the handling of mixed scopes.
...which mostly just is either ignoring the
operations or indicating failure on attempt to
'reorder' attributes (which don't have any notion of 'ordering')
overall, the structure of this implementation is still rather confusing,
yet any alternatives seem even less convincing
- if we want to avoid the delegation to base-class, we'd have
to duplicate several functions and the combined class would
handle two distinct concerns.
- any attempt to handle the IDs more "symmetrically" seems to
create additional problems on one side or the other
this also supersedes and removes the initial implementation
draft for attribute binding with the 'setAttribute' API
The elementary part of diff application incl. setting
new attribute values works by now.
The way we build this attribute binding, there is no single
entity to handle all attribute bindings. Thus the only way
to detect a missing binding is when none of the binding layers
was able to handle a given INS verb
the idea is again to perform the same sequence of primitives,
this time with a binding to some local variables within the test function
here to enact the role of "object fields"
together with drafting the first segment of the test code,
I've settled down onto an implementation approach
the plan is to use this specific diff sequence
both in the individual binding tests, and in a
more high level integration test. Hopefully this
helps to make these quite technical tests more readable
...as concluded from the preceding analysis.
NOTE this entails a semantic change, since this
predicate is now only meant to be indicative, not conclusive
remarks: the actual implementation of the diff application process
as bound via the TreeMutator remains yet to be written...
how can ordinary object fields be treated as "Attributes"
and thus tied into the Diff framework defined thus far.
This turns out to be really tricky, even questionable
while simple to add into the implementation, this whole feature
seems rather questionable to me now, thus I've added a Ticket
to be revisited later.
In a nutshell, right here, when implementing the binding layer
for STL collections, it is easy to enable the framework to treat
Ref::THIS properly, but the *actual implementation* will necessarily
be offloaded onto each and every concrete binding implementation.
Thus client code would have to add support for a rather obscure
shortcut within the Diff language. The only way to avoid this
would be to change the semantics of the "match"-lambda: if this
binding would rather be a back-translation of implementation data
into GenNode::ID values, then we'd be able to implement Ref::THIS
natively. But such an approach looks like a far inferior design
to me; having delegated the meaning of a "match" to the client
seems like an asset, since it is both natural and opens a lot
of flexibility, without adding complexity.
For that reason I tend to avoid that shortcut now, in the hope
to be able to drop it entirely from the language
write down a first draft for a definition section,
to describe the fundamental parts involved, when
applying a diff message onto implementation defined
data structures
After a break of three weeks, I found it difficult to find my way
amidst all those various levels of abstraction. In addition to this
definition, we'll probably also need a high level overview of the
whole diff system operation.
...all of this implementation boils down to slightly adjusting
the code written for the test-mutation-target. Insofar it pays off now
having implemented this diagnostic and demonstration first.
Moreover I'm implementing this basic scheme of "diff application"
roughly the fourth time, thus things kind of fall into place now.
What's really hard is all those layers of abstraction in between.
Lesson learned (after being off for three weeks, due to LAC and
other obligations): I really need to document the meaning of the
closures, and I need to document the "abstract operational semantics"
of diff application, otherwise no one will be able to provide
the correct closures.
while I still keep my stance not to allow reflection and
switch-on-type, access to the internal / semantic type of
an embedded record seems a valid compromise to allow
to deal with collections of object-like children
of mixed kind.
Indirectly (and quite intentionally) this also opens a loophole
to detect whether a given GenNode might constitute a nested scope,
but only when the actual nested element does indeed carry
a type symbol. Effectively this limits the use of this shortcut
to situations where the handling context does have some pre-established
knowledge about what types *might* be expected. This is precisely
the kind of constraint I intend to uphold: I do not want the
false notion of "total flexibility", as is conveyed by introspection.
the whole implementation will very much be based on
my experiences with the TestMutationTarget and TestWireTap.
Insofar it was a good idea to implement this test dummy first,
as a prototype. Basically what emerges here is a standard pattern
how to implement a tree mutator:
- the TreeMutator will be a one-way-off "throwaway" object.
- its lifecycle starts with sucking away the previous contents
- consuming the diff moves contents back in place
- thus the mutator always attaches onto a target by reference
and needs the ability to manipulate the target
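For a std::vector based target, this pattern boils down to the following
sketch (operation names borrowed from acceptSrc/skipSrc as discussed above,
the rest is illustrative):

    #include <cstddef>
    #include <utility>
    #include <vector>

    template<typename ELM>
    class CollectionMutator
    {
        std::vector<ELM>& target_;    // attached by reference
        std::vector<ELM>  src_;       // previous contents, "sucked away"
        std::size_t       pos_ = 0;   // current source position

    public:
        CollectionMutator (std::vector<ELM>& target)
          : target_{target}
          , src_{std::move (target)}
        {
            target_.clear();          // target now empty; consuming the diff re-fills it
        }

        void acceptSrc() { target_.emplace_back (std::move (src_[pos_++])); }
        void skipSrc()   { ++pos_; }  // advance without touching the data
    };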
the collection binding can be configured with various
lambdas to supply the basic building blocks of the generated binding.
Since we allow picking up basically anything (functors,
function pointers, function objects, lambdas), and since
we speculate on inlining optimisation of lambdas, we can not
enforce a specific signature in the builder functions.
But at least we can static_assert on the effective signature
at the point where we're generating the actual binding configuration
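For instance, a builder function might verify the effective signature
roughly like this (a sketch; the argument types of the real binding differ):

    #include <functional>
    #include <string>
    #include <type_traits>
    #include <utility>

    template<class FUN>
    FUN
    buildMatcher (FUN&& matchLambda)    // accepts functor, function pointer or lambda
    {
        static_assert (std::is_constructible<std::function<bool(std::string const&)>
                                            ,FUN>::value,
                       "matcher must be invocable as bool(string const&)");
        return std::forward<FUN> (matchLambda);
    }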
re-evaluated the decision to build on lambdas, not virtual functions:
- for sure it leads to clearer code at the usage site
- it /might/ offer better, but certainly not worse potential for compiler optimisation
...and write down some insights about the architecture
and design of tree binding and tree description related
to the TreeMutator.
When reading my notes from last year, it became clear
to me that the design of the TreeMutator has evolved
significantly, and became quite something different
than I'd imagined at start
now the full API for the "mutation primitives" is shaped.
Of course the actual implementation is missing, but that
should be low-hanging fruit by now.
What still requires some thinking though is how to implement
the selector, so we'll actually get an onion-shaped decorator
basically we'll establish a collaboration where both sides
know only the interface (contract) of the partner; a safe margin
for allocation size has to be established through metaprogramming (TODO)
...basically we've now the list mutation primitives working,
albeit in a test/dummy implementation only. Next steps will
be to integrate the assignment and sub scope primitives,
and then to re-do the same implementation respectively
for the case of mutating a standard collection of arbitrary type
now this feels like making progress again,
even when just writing stubs ;-)
Moreover, it became clear that the "typing" of typed child collections
will always be ad hoc, and thus needs to be ensured on a
case-by-case basis. As a consequence, all mutation primitives must carry the
necessary information for the internal selector to decide if this
primitive is applicable to a given decorator layer. Because
otherwise it is not possible to uphold the concept of a single,
abstracted "source position", where in fact each typed sub-collection
of children (and thus each "onion layer" in the decorator chain)
maintains its own private position
after sleeping one night over the problem, this seems to be
the most natural solution, since the possibility of assignment
naturally arises from the fact that, for tree diff, we have
to distinguish between the *identity* of an element node and
its payload (which could be recursive). Thus, IFF the payload
is an assignable value, why not allow assigning it. Doing so
elegantly solves the problem with assignment of attributes
Signed-off-by: Ichthyostega <prg@ichthyostega.de>
This basically finishes definition of the fundamental
UI-Element and Bus protocol -- with one notable exception:
how to mutate elements by diff.
This will be the next topic to address
...and I made the decision *not* to consider any kind of
generic properties for now. YAGNI.
UI coding is notorious spaghetti code.
No point in fighting that, it is just the way it is,
because somewhere you're bound to get concrete, hands-on.
still TODO: the ability to use immutable types
within the command framework. In theory, this
shouldn't be hard to implement, since we're creating
a new opaque value holder within the command registry
anyway, so it should be sufficient to refrain from
re-assigning a new value tuple. This is relevant,
since e.g. our time framework is built on immutable
value types.
as it turns out, this is a Bug in GCC 4.9 (resolved in 5.x)
See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63723
Problem is, GCC emits a warning on narrowing conversions,
while the standard actually disallows them when building
objects from brace-enclosed initialisers.
Unfortunately GCC also emits such a warning from within
a SFINAE context, instead of letting those spurious dangerous
cases fail. So we end up with additional visitor double dispatch
paths, and a lot of additional warnings.
Temporary solution is to hack a custom trait, which
explicitly declares some conversion paths as "narrowing".
Probably this can be implemented in a way more intelligent
way (using std::numeric_limits), but this doesn't seem
worth the effort, since the problem will go away through
compiler evolution eventually.
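The shape of that hack, sketched (the actual trait may well differ):

    #include <cstdint>
    #include <type_traits>

    template<typename SRC, typename TAR>
    struct is_narrowing                // by default, assume the conversion is safe
      : std::false_type
      { };

    // explicitly declare the known dangerous conversion paths as "narrowing",
    // so they can be excluded, compensating for the GCC-4.9 SFINAE bug
    template<> struct is_narrowing<std::int64_t, int>   : std::true_type { };
    template<> struct is_narrowing<std::int64_t, short> : std::true_type { };
    template<> struct is_narrowing<double,       float> : std::true_type { };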
now we're able to construct suitable parameter values from the
arguments passed embedded in the GenNodes, as is demonstrated with the
EntryID<long> constructed from an ID-string. We really need a full-blown
double-dispatch, since the content type of the concrete GenNode is only
known at runtime (encoded in the RTTI)
There is still the problem of generating some spurious additional
conversion paths, some of which are narrowing (and thus dangerous).
The compiler emits several warnings here, and all of them are justified.
E.g. it would be possible to pass an int64_t in the GenNode and initialise
a short from it. This might be convenient at times, but I tend rather to
be prohibitive here and thus consider building in distinct limitations
on the allowed conversions.
not sure yet if any of this works, because the
technicalities of dealing with variadic types are
quite different to our LISP-style typelist processing.
The good news is that with variadic templates it is
indeed possible, to supply dynamically picked arguments
to another function taking arbitrary arguments.
This all relies on the feature to unpack argument packs,
and, more specifically, on the possibility to "wrap"
this unpacking around interspersed function call syntax
template<size_t... i>
Xyz
do_something (MyTuple myTuple)
{
    return Xyz (std::get<i> (myTuple)...);
}
Here the '...' will be applied to the i... and then
the whole std::get-construct will be wrapped around
each element. Mind-boggling, but very powerful
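For reference, a self-contained variant (Xyz and buildFrom are made-up
names); std::index_sequence, available since C++14, is the standard way
to obtain the i... pack:

    #include <cstddef>
    #include <tuple>
    #include <utility>

    struct Xyz
    {
        Xyz (int, double, char) { }
    };

    template<class TUP, std::size_t...i>
    Xyz
    buildFrom (TUP const& tup, std::index_sequence<i...>)
    {
        return Xyz{std::get<i> (tup)...};    // wraps std::get around each index
    }

    int
    main()
    {
        auto args = std::make_tuple (42, 3.14, 'x');
        Xyz xyz = buildFrom (args, std::make_index_sequence<3>{});
        (void) xyz;
    }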
First part is to define the steps (the protocol) at the
model element level, which gets a command prepared and invoked.
Test fails still, because there is no actual argument binding
invoked in the TestNexus
the initial draft of this concept is in place now, and
the first round of unit tests pass. I've got some understanding
of the purpose of the interactions and involved elements
and I'm confident this design is evolving in a sane way.
Note: extensive documentation is in the TiddlyWiki,
here I've just pasted and reworded some paragraphs from there
and integrated them into the Doxygen docs
next step will be to rig the mock element and set up
and cover the basic / generic element behaviour
This changeset
- adapts the (planned) unit test to the semantic of
the EventLog, which is now fully implemented
- adjusts the function names on the public Tangible interface,
to be better in line with the naming convention of the
corresponding operations on the UI-Bus:
* "mark" operations are towards the UI element
* "note" messages are from the UI element towards some
state manager, which can be reached via the bus
what you see here now is just the tip of the iceberg...
If we follow this route, the Lumiera UI will become way more
elaborate and responsive than average desktop applications
looked at
- boost property_tree
- boost spirit
- JSON-C
- JsonCpp
- rapidjson
- vjson / gason
the results were quite obvious: --> rapidjson
Rationale: we do not want yet another object system;
rapidjson has a SAX-style API (and additionally a DOM API
if needed). And it is fast, supports in-situ parsing,
extended Unicode support, full JSON support and
an exchangeable allocator.
License is MIT
NOTE: we have the policy to always support current Debian/stable
and at least one Ubuntu LTS release, unless hard dependency problems prevent that.
Currently, Ubuntu/Trusty is already a bit dated, but the only problematic dependency
could be libboost (1.54 in Trusty, 1.55 in Jessie).
GCC-4.8 can be replaced by GCC-4.9 in Trusty without problems
It is always a bit tricky to find out the precise lower boundary,
so we try to upgrade these requirements as our platform progresses.
For now we have used the level available on Ubuntu/Trusty to set
the lower constraints for most libraries
This is a development snapshot pre-release of Lumiera.
It features codebase maintenance, upgrade to C++14 and GTK-3
and some work towards a Proc-GUI connection (unfinished)
Update README, AUTHORS, LICENSE and similar release docs.
because otherwise we'd need to send a whole subtree
over the wire and then descend into it just to find an element.
This too is a ripple effect of making '==' deep
well... this was quite a piece of work
Added some documentation, but complete documentation,
preferably on the website, would be desirable, as would
a more complete test covering the negative corner cases
It is difficult to reconcile our general architecture for the
linearised diff representation with the processing of recursive,
tree-like data structures. The natural and most clean way to
deal with trees is to use recursion, i.e. the processor stack.
But in our case, this means we'd have to peek into the next
token of the language and then forward the diff iterator
into a recursive call on the nested scope. Essentially, this
breaks the separation between receiving a token sequence and
interpretation for a concrete target data structure.
For this reason, it is preferable to make the stack an
internal state of the concrete interpreter. The downside of
this approach is the quite confusing data storage management;
we try to make the role of the storage elements a bit more
clear through descriptive accessor functions.
each language token of our "linearised diff representation"
carries a payload data element, which typically is the piece
of data to be altered (added, mutated, etc).
Basically, these elements have value semantics and are
"sent over wire", and thus it seems natural when the
language interpreter functions accept that piece of payload
by-value. But since we're now sending GenNode elements as
parameter data in our diff, which typically are of the
size of 10 data elements (640 bit on a 64bit machine),
it seems more reasonable to pass these argument elements
by const& through the interpreter function. This still
means we can (and will indeed) copy the mutated data
values when applying the diff, but we're able to
relay the data more efficiently to the point where
it's consumed.
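Expressed as a sketch of the resulting interpreter interface (verb names
taken from the diff language as used elsewhere in these notes; the GenNode
stand-in is grossly simplified):

    #include <string>

    struct GenNode { std::string id; /* ...plus the opaque DataCap payload */ };

    class DiffLanguageInterpreter      // hypothetical distilled interface
    {
    public:
        virtual ~DiffLanguageInterpreter() { }

        // payload is passed by const&: the token is relayed without copying,
        // data gets copied only at the point where the diff is applied
        virtual void ins  (GenNode const& n)  = 0;
        virtual void del  (GenNode const& n)  = 0;
        virtual void find (GenNode const& n)  = 0;
        virtual void skip (GenNode const& n)  = 0;
    };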
this boils down to the two alternatives
- manipulate the target data structure
- build an altered copy
since our goal is to handle large tree structures efficiently,
the decision was cast in favour of data manipulation
so basically it's time to explicate the way
our diff language will actually be written.
Similar to the list diff case, it's a linear sequence
of verb tokens, but in this case, the payload value
in each token is a GenNode. This is the very reason
why GenNode was conceived as value object with an
opaque DataCap payload