diff --git a/src/gui/timeline/clip-presenter.cpp b/src/gui/timeline/clip-presenter.cpp index 8a749412e..098fd7a8b 100644 --- a/src/gui/timeline/clip-presenter.cpp +++ b/src/gui/timeline/clip-presenter.cpp @@ -25,6 +25,7 @@ ** Implementation details of clip presentation management. ** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/clip-presenter.hpp b/src/gui/timeline/clip-presenter.hpp index 86876bce1..5ecd0df1d 100644 --- a/src/gui/timeline/clip-presenter.hpp +++ b/src/gui/timeline/clip-presenter.hpp @@ -44,6 +44,7 @@ ** clips, just to indicate some content is actually present in this part of the timeline. ** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/layout-manager.cpp b/src/gui/timeline/layout-manager.cpp index 24d6b9b10..0fbcf6ba1 100644 --- a/src/gui/timeline/layout-manager.cpp +++ b/src/gui/timeline/layout-manager.cpp @@ -25,6 +25,7 @@ ** Implementation details of global timeline layout management. ** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/layout-manager.hpp b/src/gui/timeline/layout-manager.hpp index fb13c20f3..1f7bf5b2f 100644 --- a/src/gui/timeline/layout-manager.hpp +++ b/src/gui/timeline/layout-manager.hpp @@ -72,6 +72,7 @@ ** [MVP pattern](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter) here. ** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/marker-widget.cpp b/src/gui/timeline/marker-widget.cpp index 52d6760f0..0208ea27b 100644 --- a/src/gui/timeline/marker-widget.cpp +++ b/src/gui/timeline/marker-widget.cpp @@ -25,6 +25,7 @@ ** Implementation of marker display. 
** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/marker-widget.hpp b/src/gui/timeline/marker-widget.hpp index e1e5c9e84..63e014b8d 100644 --- a/src/gui/timeline/marker-widget.hpp +++ b/src/gui/timeline/marker-widget.hpp @@ -30,6 +30,7 @@ ** may receive messages over the UI-Bus. ** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/timeline-controller.cpp b/src/gui/timeline/timeline-controller.cpp index 80135b2c1..985b556e5 100644 --- a/src/gui/timeline/timeline-controller.cpp +++ b/src/gui/timeline/timeline-controller.cpp @@ -32,7 +32,7 @@ ** - thus we get a rather simple mapping, with some fixed attributes and no ** flexible child collection. The root track is implemented as TrackPresenter. ** - ** @todo as of 12/2016 a complete rework of the timeline display is underway + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** @see TimelineWidget ** */ diff --git a/src/gui/timeline/timeline-controller.hpp b/src/gui/timeline/timeline-controller.hpp index 23b78b782..44df1e57c 100644 --- a/src/gui/timeline/timeline-controller.hpp +++ b/src/gui/timeline/timeline-controller.hpp @@ -45,7 +45,7 @@ ** - these in turn manage a set of ClipPresenter entities ** - and those presenters care for injecting suitable widgets into the TimelineWidget's parts. 
** - ** @todo as of 12/2016 a complete rework of the timeline display is underway + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/src/gui/timeline/timeline-widget.cpp b/src/gui/timeline/timeline-widget.cpp index 52a9bc824..cbb72058c 100644 --- a/src/gui/timeline/timeline-widget.cpp +++ b/src/gui/timeline/timeline-widget.cpp @@ -24,7 +24,7 @@ /** @file timeline/timeline-widget.cpp ** Implementation details of Lumiera's timeline display widget. ** - ** @todo as of 12/2016 a complete rework of the timeline display is underway + ** @todo as of 10/2018 a complete rework of the timeline display is underway ** @see timeline-controller.cpp ** */ diff --git a/src/gui/timeline/timeline-widget.hpp b/src/gui/timeline/timeline-widget.hpp index f968b3849..f8c7a7181 100644 --- a/src/gui/timeline/timeline-widget.hpp +++ b/src/gui/timeline/timeline-widget.hpp @@ -47,7 +47,7 @@ ** is `sigc::trackable`, which means after destruction any further signals ** will be silently ignored. ** - ** @todo as of 12/2016 a complete rework of the timeline display is underway + ** @todo as of 10/2018 a complete rework of the timeline display is underway ** */ diff --git a/src/gui/timeline/track-presenter.cpp b/src/gui/timeline/track-presenter.cpp index cc6e3e993..bce8ed079 100644 --- a/src/gui/timeline/track-presenter.cpp +++ b/src/gui/timeline/track-presenter.cpp @@ -25,6 +25,7 @@ ** Implementation details of track presentation management. 
** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ @@ -85,10 +86,11 @@ namespace timeline { /** * @note we distinguish between the contents of our three nested child collections - * based on the symbolic type field send in the Record type within the diff representation + * based on the symbolic type field sent in the Record type within the diff representation * - "Marker" designates a Marker object * - "Clip" designates a Clip placed on this track * - "Fork" designates a nested sub-track + * @see TimelineController::buildMutator() for a basic explanation of the data binding mechanism */ void TrackPresenter::buildMutator (TreeMutator::Handle buffer) diff --git a/src/gui/timeline/track-presenter.hpp b/src/gui/timeline/track-presenter.hpp index 41df348f5..9e71323d4 100644 --- a/src/gui/timeline/track-presenter.hpp +++ b/src/gui/timeline/track-presenter.hpp @@ -24,7 +24,7 @@ /** @file track-presenter.hpp ** Presentation control element to model and manage a track within the timeline UI. ** In the Lumiera timeline UI, we are mixing two different scope of concerns: For one, - ** we have the globally tangible scope of actual session elements an operations performed + ** we have the globally tangible scope of actual session elements and operations performed ** on those. And then there are more local considerations regarding the "mechanics" of the ** UI elements, their state and immediate feedback to user interactions. The _Presenter_ -- ** as known from the [MVP pattern](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter) -- @@ -36,9 +36,10 @@ ** part of layout building, delegating to a mostly passive GTK widget for the actual display. 
** This way it becomes possible to manage the actual UI resources on a global level, avoiding to ** represent potentially several thousand individual elements as GTK entities, while at any time - ** only several can be visible and active as far as user interaction is concerned. + ** only a small number of elements can be visible and active as far as user interaction is concerned. ** ** @todo WIP-WIP-WIP as of 12/2016 + ** @todo as of 10/2018 timeline display in the UI is rebuilt to match the architecture ** */ diff --git a/wiki/renderengine.html b/wiki/renderengine.html index 655a0034a..a6a6f8ceb 100644 --- a/wiki/renderengine.html +++ b/wiki/renderengine.html @@ -1997,7 +1997,7 @@ Some observations: * Optimisation achieves access times around ≈ 1ns -
Along the way of working out various [[implementation details|ImplementationDetails]], decisions need to be made on how to understand the different facilities and entities and how to tackle some of the problems. This page is mainly a collection of keywords, summaries and links to further the discussion. And the various decisions should always be read as proposals to solve some problem at hand...
''Everything is an object'' — yes of course, that's a //no-brainer// today. What matters, rather, is to note what is //not// "an object", meaning it can't be arranged arbitrarily
@@ -2011,6 +2011,8 @@ We ''separate'' processing (rendering) and configuration (building). The [[Build
''Objects are [[placed|Placement]] rather'' than assembled, connected, wired, attached. This is more of a rule-based approach and gives us one central metaphor and abstraction, allowing us to treat everything in a uniform manner. You can place it as you like, and the builder tries to make sense out of it, silently disabling what doesn't make sense.
A [[Sequence]] is just a collection of configured and placed objects (and has no additional, fixed structure). [["Tracks" (forks)|Fork]] form a mere organisational grid: they are grouping devices, not first-class entities (a track doesn't "have" a pipe or "is" a video track and the like; it can be configured to behave in such manner by using placements though). [[Pipes|Pipe]] are hooks for making connections and are the only facility to build processing chains. We have global pipes, and each clip is built around a local [[source port|ClipSourcePort]] — and that's all. No special "media viewer" and "arranger", no special role for media sources, no commitment to some fixed media stream types (video and audio). All of this is sort of pushed down to be configuration, represented as an asset of some kind. For example, we have [[processing pattern|ProcPatt]] assets to represent the way of building the source network for reading from some media file (including codecs treated like effect plugin nodes)
+The model in Proc-Layer is rather an //internal model.// What is exposed globally is a structural understanding of this model. In this structural understanding, there are Assets and ~MObjects, which represent the flip sides of the same coin: Assets relate to bookkeeping, while ~MObjects relate to building and manipulation of the model. In the actual data representation within the HighLevelModel, we settled upon some internal reductions, preferring either the //Asset side// or the //~MObject side// to represent some relevant entities. See → AssetModelConnection.
+
Actual ''media data and handling'' is abstracted rigorously. Media is conceived as being stream-like data of distinct StreamType. When it comes to more low-level media handling, we build on the DataFrame abstraction. Media processing isn't the focus of Lumiera; we organise the processing but otherwise ''rely on media handling libraries.'' In a similar vein, multiplicity is understood as type variation. Consequently, we don't build an audio and video "section" and we don't even have audio tracks and video tracks. Lumiera uses tracks and clips, and clips build on media, but we're able to deal with [[multichannel|MultichannelMedia]] mixed-typed media natively.
Lumiera is not a connection manager, it is not an audio-visual real-time performance instrument, and it doesn't aim at running presentations. It's an ''environment for assembling and building up'' something (an edit, a session, a piece of media work). This decision is visible at various levels and contexts, like a reserved attitude towards hardware acceleration (it //will// be supported, but reliable proxy editing has a higher priority), or the decision not to incorporate system-level ports and connections directly into the session model (they are mapped to [[output designations|OutputDesignation]] rather)
@@ -2686,7 +2688,7 @@ As starting point, {{red{in winter 2016/17}}} the old (broken) timeline panel wa
After those initial experiments, my focus shifted to the still unsatisfactory top-level UI structure, and since then I have been working towards an initial integration with the Proc-Layer.
//The representation of a [[media clip|Clip]] for manipulation by the user within the UI.//
Within Lumiera, a clip is conceived as a //chunk of media,// which can be handled as a compound. A clip as such is an abstract concept, which is treated with minimal assumptions...
* we know that a clip has //media content,// which need not be uniform and can be inherently structured (e.g. several media, several channels)
@@ -2720,7 +2722,7 @@ Starting from the architecture as predetermined by the UI-Bus, it is clear that
!!!how to carry out the clip appearances
Initially, one could think that we'd need to build several widgets to realise the wide variety of clip appearances. But in fact it turns out that we're able to reshape a single base widget to encompass all the necessary presentation styles. This base widget is a simple, one-element container, with {{{Gtk::Frame}}} being the most obvious pick. This already gives us a rectangular covered space, and the ability to add a label widget, optionally with controlled alignment of the label. All the more elaborate presentation styles can be achieved by adding a canvas widget into this frame and then placing additional stuff on top of that. The only tricky part arises in overview display, when just some clip rectangle can stand in for a whole series of clips, which themselves remain hidden as UI elements.
-A prime consideration regarding this whole clip presentation strategy is the performance concern. It is quite common for movie edits to encompass several hundred individual clips. Combined with several tracks and an elaborate audio edit, it may well happen that we end up with thousands of individual UI objects. If treated naively, this load might seriously degrade the responsiveness of the interface. Thus we need to care for the relevant infrastructure to enable optimisation of the display. For that reason, the memory footprint of the ClipPresenter and the basic widget has to be kept as small as possible. And moreover, since we do our own layout management in the timeline display, in theory it is possible any time just to add those widgets to the enclosing GTK container, which are actually about to become visible. (if we follow this approach, a problem yet to be solved is how to remove widgets falling out of sight, since removing N widgets easily turns into a quadratic operation).
+A prime consideration regarding this whole clip presentation strategy is the performance concern. It is quite common for movie edits to encompass several hundred individual clips. Combined with several tracks and an elaborate audio edit, it may well happen that we end up with thousands of individual UI objects. If treated naively, this load might seriously degrade the responsiveness of the interface. Thus we need to provide the relevant infrastructure to enable optimisation of the display. For that reason, the memory footprint of the ClipPresenter and the basic widget has to be kept as small as possible. And moreover, since we do our own layout management in the timeline display, in theory it is possible to add //only those// widgets to the enclosing GTK container which are //actually about to become visible.// (if we follow this approach, a problem yet to be solved is how to remove widgets falling out of sight, since removing N widgets easily turns into a quadratic operation).
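The lazy-attachment idea sketched in the preceding paragraph can be illustrated in plain standard C++ (all names and the data layout here are invented for illustration; the real ClipPresenter is more involved): a lightweight record is kept for every clip, while a heavyweight widget counts as "attached" only while the clip intersects the visible time window.

```cpp
#include <cstddef>
#include <vector>

// Lightweight per-clip record -- deliberately small, since thousands of
// clips may exist while only a handful are visible at any given time.
struct ClipRec
{
    long start, end;        // clip extension on the timeline (invented units)
    bool attached = false;  // whether a heavyweight widget is currently attached
};

// Attach widgets only for clips overlapping the window [winStart, winEnd);
// returns the number of widgets attached after the update.
std::size_t
updateVisibleWidgets (std::vector<ClipRec>& clips, long winStart, long winEnd)
{
    std::size_t attachedCnt = 0;
    for (auto& clip : clips)
      {
        bool visible = clip.start < winEnd && clip.end > winStart;
        clip.attached = visible;   // attach or detach the actual widget here
        if (visible) ++attachedCnt;
      }
    return attachedCnt;
}
```

With, say, a thousand clips of ten time units each, a window spanning a hundred units keeps only ten widgets attached -- the load on the GTK container stays proportional to the visible portion, not to the session size.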
!clip content rendering
In a typical editing application, the user can expect to get some visual clue regarding the media content of a clip. For example, sound clips can be visualised as waveform, while movie clips might feature a sequence of images taken from the video. Our intention is to ''use our own rendering engine'' to produce these thumbnails. In fact, our engine is perfectly suited for this task: it has precisely the necessary media decoding and rendering abilities, plus it offers an elaborate system of priorities and deadlines, allowing us to throttle the load produced by thumbnail generation. In addition to all those qualities, our engine is planned to be complemented by an "intelligent" frame cache, which, given proper parametrisation, ensures the frequently requested thumbnails will be available for quick display. For this approach to work, we need to provide some infrastructure
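The deadline-driven scheduling alluded to here can be sketched minimally in standard C++ (all names invented; the actual engine scheduler is far more elaborate): thumbnail requests enter a queue ordered by deadline, so the most urgent preview is always taken up first and background generation can be throttled without starving playback rendering.

```cpp
#include <queue>
#include <string>
#include <vector>

// A pending thumbnail request (illustrative fields only)
struct ThumbJob
{
    long deadline;       // e.g. milliseconds until the preview should appear
    std::string clipID;  // which clip requested the preview
};

// Order the queue so the job with the *earliest* deadline sits on top
struct LaterDeadline
{
    bool operator() (ThumbJob const& a, ThumbJob const& b) const
    { return a.deadline > b.deadline; }
};

using ThumbQueue = std::priority_queue<ThumbJob, std::vector<ThumbJob>, LaterDeadline>;
```

A worker then pops jobs from the top of such a queue whenever spare capacity is available, which is precisely the hook where load throttling can be applied.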
@@ -2730,7 +2732,7 @@ In a typical editing application, the user can expect to get some visual clue re
* we still need to work out how buffer management for this task will be handled; it should be a derivative of typical buffer management for display rendering.
* the clip widget needs to provide a simple placeholder drawing to mark the respective space in the interface, until the actual preview arrives.
-To start with, mostly this means to avoid a naive approach, like having code in the UI to pull in some graphics from media files. We certainly won't just render every media channel blindly. Rather, we acknowledge that we'll have a //strategy,// depending on the media content and some further parameters of the clip. This might well just be a single ''pivot image'' chosen explicitly by the editor to represent a given take. And the actual implementation of content preview rendering will largely be postponed until we get our rendering engine into a roughly working state.
+To start with, this mostly means avoiding a naive approach, like having code in the UI to pull in some graphics from media files. We certainly won't just render every media channel blindly. Rather, we acknowledge that we'll have a //strategy,// depending on the media content and some further parameters of the clip. This might well just be a single ''pivot image'' chosen explicitly by the editor to represent a given take. Presumably, the proper place to house that display strategy is ''within the session model'', not within the UI. And the actual implementation of content preview rendering will largely be postponed until we get our rendering engine into a roughly working state.
A specially configured LumieraPlugin, which actually contains or loads the complete code of the (GTK)GUI, and additionally is linked dynamically against the application core lib. During the [[UI startup process|GuiStart]], loading of this Plugin is triggered from {{{main()}}}. Actually this causes spawning of the GTK event thread and execution of the GTK main loop.
Within the Lumiera GUI, the [[Timeline]] structure(s) from the HighLevelModel are arranged and presented according to the following principles and conventions. Several timeline views may be present at the same time -- and there is not necessarily a relation between them, since »a Timeline« is the top-level concept within the [[Session]]. Obviously, there can also be several //views// based on the same »Timeline« model element, and in this latter case, these //coupled views// behave according to a linked common state. An entity »Timeline«, as represented through the GUI, emerges from the combination of several model elements * a root level [[Binding|BindingMO]] acts as framework @@ -3436,7 +3438,7 @@ Several timeline views may be present at the same time -- and there is not neces Session, Binding and Sequence are the mandatory ingredients. !Basic layout -[>img[Clip presentation control|draw/UI-TimelineLayout-1.png]]The representation is split into a ''Header pane'' exposing structure and configuration, and a ''Content pane'' extending in time. The ''Time ruler'' running alongside the top of the content pane represents the //position in time.// Beyond this temporal dimension, the content area is conceived as a flexible working space. This working space //can// be structured hierarchically -- when interacting with the GUI, hierarchical nesting will be created and collapsed on demand. Contrast this with conventional editing applications which are built upon the rigid notion of "Tracks": Lumiera is based on //Pipes// rather than Tracks. +[>img[Clip presentation control|draw/UI-TimelineLayout-1.png]]The representation is split into a ''Header pane'' exposing structure and configuration, and a ''Content pane'' extending in time. The ''Time ruler'' running alongside the top of the content pane represents the //position in time.// Beyond this temporal dimension, the content area is conceived as a flexible working space. 
This working space //can// be structured hierarchically -- when interacting with the GUI, hierarchical nesting will be created and collapsed on demand. Contrast this with conventional editing applications which are built upon the rigid notion of "Tracks": Lumiera is based on //Pipes// and //Scopes// rather than Tracks. In the temporal dimension, there is the usual scrolling and zooming of content, and possibly a selected time range, and after establishing a ViewerPlayConnection, there is an effective playback location featured as a "Playhead". The workspace dimension (vertical layout) is more like a ''Fork'', which can be expanded recursively. More specifically, each strip or layer or "track" can be featured in //collapsed// or //expanded state.// @@ -3469,7 +3471,7 @@ In the most general case, there can be per-track content and nested content at t → important question: how to [[organise the widgets|GuiTimelineWidgetStructure]]
The Timeline is probably the most prominent place in the GUI where we need to come up with a custom UI design.
Instead of combining standard components in one of the well-known ways, here we need to come up with our own handling solution -- which also means writing one or several custom GTK widgets. Thus the question of layout and screen space division and organisation becomes a crucial design decision. The ~GTK-2 GUI, as currently implemented, already took some steps along this route, yet this kind of decision should be cast and documented explicitly (be it after the fact).
@@ -3514,7 +3516,7 @@ Applying a diff changes the structure, that is, the structure of the local model
* when a model element happens to be destructed, the corresponding display element has to be removed.
* such might be triggered indirectly, by clean-up of leftovers, since the {{{DiffApplicator}}} re-orders and deletes by leaving some data behind
* the diff also re-orders model elements, which does not have an immediate effect on the display, but needs to be interpreted separately.
-Together this means we get a fix up stage after model changes, where the display is re-adjusted to fit the new situation. This works in concert with the [[display manager|TimelineDisplayManager]] representing only those elements as actual widgets, which get a real chance to become visible. This way we can build on the assumption that the actual number of widgets to be managed any time remains so small as to get away with simple linear list processing. It remains to be seen how far this assumption can be pushed -- the problem is that the GTK container components don't support anything beyond such simple linear list processing; there isn't even a call to remove all child widgets of a container in a single pass.
+Wrapping this up, we get a fix-up stage after model changes, where the display is re-adjusted to fit the new situation. This works in concert with the [[display manager|TimelineDisplayManager]], which represents as actual widgets only those elements that stand a real chance of becoming visible. This way we can build on the assumption that the number of widgets to be managed at any time remains small enough to get away with simple linear list processing. It remains to be seen how far this assumption can be pushed -- the problem is that the GTK container components don't support anything beyond such simple linear list processing; there isn't even a call to remove all child widgets of a container in a single pass.
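The quadratic-removal concern can be made concrete with plain standard C++ (a vector stands in for the GTK container's child list here -- this is not gtkmm code): erasing children one by one from the front shifts every remaining entry on each call, quadratic in total, whereas removing from the back avoids the shifting entirely.

```cpp
#include <cstddef>
#include <vector>

// Remove all "children" front-first; counts how many elements get shifted.
// Each erase() at the front moves every remaining entry -- O(n^2) overall.
std::size_t
removeAllFrontwise (std::vector<int> children)
{
    std::size_t moves = 0;
    while (!children.empty())
      {
        moves += children.size() - 1;    // entries shifted by this erase
        children.erase (children.begin());
      }
    return moves;
}

// Remove all "children" back-first; no element ever needs to be shifted.
std::size_t
removeAllBackwise (std::vector<int> children)
{
    std::size_t moves = 0;               // stays zero: pop_back() shifts nothing
    while (!children.empty())
        children.pop_back();
    return moves;
}
```

For 100 children, front-first removal shifts 4950 entries in total, back-first none -- which is why the removal order (or a bulk-clear call, were one available) starts to matter once widget counts grow.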
While the low-level model holds the data used for carrying out the actual media data processing (=rendering), the high-level model is what the user works upon when performing edit operations through the GUI (or script driven in »headless mode«). Its building blocks and combination rules determine largely what structures can be created within the [[Session]].
On the whole, it is a collection of [[media objects|MObjects]] stuck together and arranged by [[placements|Placement]].
@@ -3577,12 +3579,12 @@ Within Lumiera, "tracks" (actually implemented as [[forks|Fork]]) are
As placements have the ability to cooperate and derive any missing placement specifications, this creates a hierarchical structure throughout the session, where parts on any level behave similarly where applicable. For example, when a fork ("track") is anchored to some external entity (label, sync point in sound, etc), all objects placed relative to this track will adjust and follow automatically. This relation between the track tree and the individual objects is especially important for the wiring, which, if not defined locally within an ~MObject's placement, is derived by searching up this track tree and utilizing the wiring plug locating pins found there, if applicable. In the default configuration, the placement of a sequence's root track contains a wiring plug for video and another wiring plug for audio. This setup is sufficient for getting every object within this sequence wired up automatically to the correct global output pipe. Moreover, when adding another wiring plug to some sub-track, we can intercept and reroute the connections of all objects creating output of this specific stream type within this track and on all child tracks.
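The wiring lookup described above -- searching up the fork ("track") tree until a wiring plug is found -- can be sketched as follows (types and names are invented for illustration; real placements resolve considerably more than just wiring):

```cpp
// A node in the fork ("track") tree; only the wiring aspect is modelled here.
struct Fork
{
    Fork const* parent;      // nullptr at the root track
    char const* wiringPlug;  // nullptr if no wiring plug is placed locally
};

// Walk up the track tree until some placement provides a wiring plug.
char const*
resolveWiring (Fork const* fork)
{
    for ( ; fork; fork = fork->parent)
        if (fork->wiringPlug)
            return fork->wiringPlug;
    return nullptr;          // no plug found anywhere up to the root
}
```

An object on a deeply nested sub-track thus inherits the plug from the root track's placement, while placing another plug on an intermediate sub-track intercepts and reroutes everything below it.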
-Besides routing to a global pipe, wiring plugs can also connect to the source port of an ''meta-clip''. In this example session, the outputs of ~Seq-2 as defined by locating pins in it's root track's placement, are directed to the source ports of a [[meta-cllip|VirtualClip]] placed within ~Seq-1. Thus, within ~Seq-1, the contents of ~Seq-2 appear like a pseudo-media, from which the (meta) clip has been taken. They can be adorned with effects and processed further completely similar to a real clip.
+Besides routing to a global pipe, wiring plugs can also connect to the source port of a ''meta-clip''. In this example session, the outputs of 'Seq-2', as defined by locating pins in its root track's placement, are directed to the source ports of a [[meta-clip|VirtualClip]] placed within 'Seq-1'. Thus, within 'Seq-1', the contents of 'Seq-2' appear like a pseudo-media, from which the (meta) clip has been taken. They can be adorned with effects and processed further, just like a real clip.
Finally, this example shows an ''automation'' data set controlling some parameter of an effect contained in one of the global pipes. From the effect's POV, the automation is simply a ParamProvider, i.e. a function yielding a scalar value over time. The automation data set may be implemented as a bézier curve, or by a mathematical function (e.g. sine or fractal pseudo-random) or by some captured and interpolated data values. Interestingly, in this example the automation data set has been placed relative to the meta-clip (albeit on another track), thus it will follow and adjust when the latter is moved.
This wiki page is the entry point to detail notes covering some technical decisions, details and problems encountered in the course of the years, while building the Lumiera application. * [[Packages, Interfaces and Namespaces|InterfaceNamespaces]] @@ -3619,6 +3621,7 @@ Finally, this example shows an ''automation'' data set controlling some paramete * how to shape the GuiConnection: with the help of a mediating GuiModel, which acts as UI-Bus, exchanging TreeDiffModel messages for GuiModelUpdate * shape the Backbone of the UI, the UI-Bus and the GuiTopLevel * build a framework for InteractionControl in the User Interface +* establish a flexible layout structure for the GuiTimelineView
Open issues, Things to be worked out, Problems still to be solved... !!Parameter Handling @@ -5987,9 +5990,10 @@ We need to work out guidelines for dealing with operations going on simultaneous !!the perils of data representation In software development, there is a natural inclination to cast "reality" into data, the structure of which has to be nailed down first. Then, everyone might "access reality" and work on it. Doing so might sound rational, natural, even self-evident and sound, yet as compelling as it might be, this approach is fundamentally flawed. It is known to work well only for small, "handsome" projects, where you clearly know up-front what you're up to: namely to get away, after being paid, before anyone realises the fact you've built something that looks nice but does not fit. -So the challenge of any major undertaking in software construction is //not to build an universal model.// Rather, we want to arrive at something that can be made to fit. Over and over again. +So the challenge of any major undertaking in software construction is //not to build a universal model of truth.// Rather, we want to arrive at something that can be made to fit. +Which can be remoulded, over and over again, without breaking down. -More specifically, we start building something, and while under way, our understanding sharpens, and we learn that actually we want something entirely different. Yet still we know what we need and we don't want just something arbitrary. There is a constant core in what we're headed at, and we need the ability to //settle matters.// We need a backbone to work against, a skeleton to support us with its firmness, while also offering joints and links, to be bent and remoulded without breakage. The distinctive idea to make such possible is the principle of subsidiarity. 
The links and joints between autonomous centres can be shaped to be in fact an exchange, a handover based on common understanding of the //specific matters to deal with,// at that given joint. +More specifically, we start building something, and while under way, our understanding sharpens, and we learn that actually we want something entirely different. Yet still we know what we need and we don't want just something arbitrary. There is a constant core in what we're headed at, and we need the ability to //settle matters.// We need a backbone to work against, a skeleton to support us with its firmness, while also offering joints and links, to be bent and remoulded without breakage. The distinctive idea to make such possible is the principle of ''Subsidiarity''. The links and joints between autonomous centres can be shaped to be in fact an exchange, a handover based on common understanding of the //specific matters to deal with,// at that given joint.
The Session contains all information, state and objects to be edited by the User. From a user's view, the Session is synonymous with the //current Project//. It can be [[saved and loaded|SessionLifecycle]]. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one (or several) collections within the Session, which we call [[Sequence]]. → [[Session design overview|SessionOverview]] → Structure of the SessionInterface @@ -6910,6 +6914,7 @@ The Session object is a singleton — actually it is a »~PImpl«-Facade * the [[Asset subsystem|AssetManager]] is tightly integrated; besides, there are some SessionServices for internal use → see [[relation of timeline, sequences and objects|TimelineSequences]] +→ see //clarification of some fine points// regarding [[relation of Assets and MObjects|AssetModelConnection]] !Session lifecycle @@ -6962,7 +6967,7 @@ The session and the models rely on dependent objects being kept updated and con → see [[details here...|ModelDependencies]]
"Session Interface" has several meanings, depending on the context.
;application global
:the session is a data structure, which can be saved and loaded, and manipulated by [[sending commands|CommandHandling]]
@@ -7006,6 +7011,10 @@ To the contrary, the ''generic'' API is related to a //current location (state),
* to add
* to destroy
+!!relation of [[Assets|Asset]] and MObjects
+{{red{WARNING -- still under construction 10/2018}}}. Basically, these two aspects represent the flip sides of the same coin. //Assets relate to the bookkeeping view.// However, we build a data model, and thus use representations for the involved entities. This creates some redundancy at times; we made an effort to reduce this redundancy and minimise the necessary data model representation. This means some things are rather handled and represented as Assets, while others are primarily dealt with as ~MObjects.
+→ see //discussion of some fine points// regarding [[relation of Assets and MObjects|AssetModelConnection]]
+
!!exploring session contents
Typically, the list of timelines serves as starting point for exploring the model. Basically, any kind of object could be attached anywhere, but both the GUI and the Builder rely on assumptions regarding the [[overall model structure|HighLevelModel]] — silently ignoring content not in line with these assumptions. This corresponds to the //dedicated API functions// on specific kinds of objects, which allow retrieving content according to this assumed structure conveniently and with strong typing. From the timeline and the associated sequence you'll get the root track, and from there the sub-tracks and the clips located on them, which in turn may have attachments (effects, transitions, labels).
On the other hand, arbitrary structures can be retrieved using the //generic API:// Contents can be discovered on the QueryFocus, which automatically follows the //point of mutation,// but can be moved to any point using the {{{QueryFocus::attach(obj)}}} function.
diff --git a/wiki/thinkPad.ichthyo.mm b/wiki/thinkPad.ichthyo.mm
index 30f3dc6c2..e77f228e0 100644
--- a/wiki/thinkPad.ichthyo.mm
+++ b/wiki/thinkPad.ichthyo.mm
@@ -38,15 +38,18 @@
-
+
-
-
+
+
+
+
+
@@ -70,8 +73,7 @@
TODO: gather and document information