diff --git a/doc/design/application/Config.txt b/doc/design/application/Config.txt index e30cb4a85..e6cbaf1e7 100644 --- a/doc/design/application/Config.txt +++ b/doc/design/application/Config.txt @@ -10,6 +10,6 @@ The Lumiera application uses two quite different sources for configuration to be resolve employing a rules based system '(planned)'. Configuration rules will be provided by the application (defaults), a session template and rules stored in the actual session. --> see also the link:{ldoc}/technical/backend/ConfigLoader.html[Config Loader brainstorming from 2008] (implementation details) +-> see also the link:{ldoc}/technical/vault/ConfigLoader.html[Config Loader brainstorming from 2008] (implementation details) diff --git a/doc/design/architecture/playRender.txt b/doc/design/architecture/playRender.txt index 0f9060de5..c5350e4e8 100644 --- a/doc/design/architecture/playRender.txt +++ b/doc/design/architecture/playRender.txt @@ -170,7 +170,7 @@ where to place quantisation: the Vault, the GUI, the player and the session. - putting it into the vault layer seems to be the most reasonable at first sight: we can ``do away'' with nasty things soon, especially if they are technicalities, ``get a clean state soon'' -- and hasn't frame quantisation something to do - with media data, which is handled in the backend? + with media data, which is handled in the vault? + Well, actually, all of those are pitfalls to trap the unwary. About cleanliness, well, sigh! 
Doing rounding soon will leave us with a huge diff --git a/doc/design/gui/GuiDiscussion/GuiBrainstormingWontImplement.txt b/doc/design/gui/GuiDiscussion/GuiBrainstormingWontImplement.txt index c32ee9e6a..97aee51d9 100644 --- a/doc/design/gui/GuiDiscussion/GuiBrainstormingWontImplement.txt +++ b/doc/design/gui/GuiDiscussion/GuiBrainstormingWontImplement.txt @@ -63,7 +63,7 @@ The user could be asked to choose their experience level, and more complex optio * Client / Server Model + The server process will act as the master coordinator for the system, and will accept input from multiple GUI clients, and dictate tasks to multiple slave processes (even on separate physical servers). The GUI client application could be multi-platform. File transfer and communication could take place over SSH and make use of SVN for project management. Proxy editing will be the norm, due to the higher resolutions of final videos (the RED Epic will handle 5K). The entire system could easily work on a single Linux workstation, for easy adaptibility from handling home videos to expand to editing cinema films (which could benefit from dedicated GUIs to handle video, sound, etc.). - - Comment: Because the different parts of a project are so tightly integrated, it won't be possible to have one instance of the GUI that only has audio, and another that only does video etc. Moreover, the controller PC will hold the source video data. It is true that we plan to make a distributed backend, but the proc and GUI layers will remain on the controller PC. It's very hard to make a distributed GUI, and even harder to make lumiera have both a distributed front and backend. + - Comment: Because the different parts of a project are so tightly integrated, it won't be possible to have one instance of the GUI that only has audio, and another that only does video etc. Moreover, the controller PC will hold the source video data. 
It is true that we plan to make a distributed backend, but the core and GUI layers will remain on the controller PC. It's very hard to make a distributed GUI, and even harder to make lumiera have both a distributed front and backend. -- link:joelholdsworth[] [[DateTime(2008-07-21T21:58:48Z)]] * Navigation Systems + diff --git a/doc/design/gui/GuiDiscussion/TimelineDiscussion.txt b/doc/design/gui/GuiDiscussion/TimelineDiscussion.txt index 95491c40a..0faa1a779 100644 --- a/doc/design/gui/GuiDiscussion/TimelineDiscussion.txt +++ b/doc/design/gui/GuiDiscussion/TimelineDiscussion.txt @@ -190,10 +190,10 @@ Hi Joel, I'll summarize some of my thoughts here. Maybe we should split of this discussion to a separate page? Maybe we should carry over more detailed discussions to the Mailinglist? First of all -- I really like your work and the effort to get the GUI reasonable complete but simple. Next, when I choose to -take a certain approach for implementing a basic feature in the Proc-Layer, this +take a certain approach for implementing a basic feature in some lower layer, this doesn't bind the GUI to present it exactly the way it's implemented. Of course, in many cases the GUI will be reasonable close to the actual underlying -implementation, but it's an independent layer after all. The Proc I am building +implementation, but it's an independent layer after all. The Session I am building is very flexible, and I take it for guaranteed that the GUI has the freedom of using some of those possibilities always in one certain limited way. 
And, finally, please don't take my arguments as general criticism -- I am just diff --git a/doc/design/index.txt b/doc/design/index.txt index 0b256c998..c432b5834 100644 --- a/doc/design/index.txt +++ b/doc/design/index.txt @@ -64,8 +64,8 @@ operation initiated by the user is actually _executed_ in the context of the ses after each change, a component known as _the Builder_ assembles the contents of this session model to transform them into a network of nodes, which can be efficiently _performed_ for rendering. Often, we refer to this node structure as the ``low-level model''. On rendering or -playback, the Proc-layer is responsible for accessing this low-level node structure to -generate individual _frame render jobs,_ ready to be handed over to the backend, which +playback, the Steam-layer is responsible for accessing this low-level node structure to +generate individual _frame render jobs,_ ready to be handed over to the Vault, which finally initiates the actual rendering calculations. + -> more about the link:model/index.html[Model] + -> design of the link:engine/index.html[Engine] subsystem @@ -79,7 +79,7 @@ The Vault Layer attaches to the low-level model and the _render jobs_ generated It actually conducts the rendering operations and makes heavy use of the Input/Output System for accessing media content and sending generated data to the screen or to external output. + --> link:lowlevel/index.html[Backend design level documents] + +-> link:lowlevel/index.html[Lumiera Vault design level documents] + -> link:{ldoc}/technical/vault/index.html[technical documentation] + @@ -104,6 +104,6 @@ heavily on current FLOSS implementations and external libraries. Moreover, the application will be configurable and can be extended in various ways; whenever some extension isn't permanent or will be used only on demand, it is packaged as a separate module into a plug-in. For example, the GUI of Lumiera -is a plugin. + +is a plugin. 
+ -> design documents regarding the link:plugins/index.html[Plugins] diff --git a/doc/design/lowlevel/index.txt b/doc/design/lowlevel/index.txt index 4ad72ac4e..e27f8cfa0 100644 --- a/doc/design/lowlevel/index.txt +++ b/doc/design/lowlevel/index.txt @@ -11,7 +11,7 @@ data access. Within Lumiera, there are two main kinds of data handling: especially logging, replaying and ``Undo'' of all ongoing modifications.. * Media data is handled _frame wise_ -- as described below. -The vault layer (backend) uses *memory mapping* to make data available to the program. +The vault layer (``backend'') uses *memory mapping* to make data available to the program. This is somewhat different to the more common open/read/write/close file access, while giving superior performance and much better memory utilization. The Vault-Layer must be able to handle more data than will fit into the memory diff --git a/doc/devel/rfc/EngineInterfaceOverview.txt b/doc/devel/rfc/EngineInterfaceOverview.txt index 8dd20378e..41226318f 100644 --- a/doc/devel/rfc/EngineInterfaceOverview.txt +++ b/doc/devel/rfc/EngineInterfaceOverview.txt @@ -41,7 +41,7 @@ Render Process ~~~~~~~~~~~~~~ The render process brackets an ongoing calculation as a whole. It is not to be confused with a operating system process or thread; rather it is a point of -reference for the relevant entities in the GUI and Steam-Layer in need to +reference for the relevant entities in the Stage and Steam-Layer in need to connect to such a "rendering", and it holds the specific definitions for this calculation series. A render process _corresponds to a single data stream_ to be rendered. 
Thus, when the play diff --git a/doc/devel/rfc/LayerNames.txt b/doc/devel/rfc/LayerNames.txt index f9eb683d0..0dbfa0705 100644 --- a/doc/devel/rfc/LayerNames.txt +++ b/doc/devel/rfc/LayerNames.txt @@ -65,7 +65,7 @@ Tasks * update and add documentation ([green]#✔ done#) * adjust source folders and namespaces ([green]#✔ done#) * adjust build system and library names ([green]#✔ done#) - * fix textual usage in code and documentation [red yellow-background]#TBD# + * fix textual usage in code and documentation ([green]#✔ done#) Discussion diff --git a/doc/devel/rfc/ProcHighLevelModel.txt b/doc/devel/rfc/ProcHighLevelModel.txt index e1da79534..f253cf77b 100644 --- a/doc/devel/rfc/ProcHighLevelModel.txt +++ b/doc/devel/rfc/ProcHighLevelModel.txt @@ -201,9 +201,9 @@ Cons assumed that the builder will follow certain patterns and ignore non conforming parts * allows to create patterns which go beyond the abilities of current GUI - technology. Thus the interface to the GUI layer needs extra care an won't be - a simple as it could be with a more conventional approach. Also, the GUI - needs to be prepared that objects can move in response to some edit + technology. Thus the interface to the stage layer (GUI) needs extra care and + won't be as simple as it could be with a more conventional approach. Also, + the GUI needs to be prepared that objects can move in response to some edit operation.
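The last point in the hunk above — that the GUI must be prepared for objects moving in response to an edit operation — boils down to a change-notification hookup between the session core and the stage layer. Here is a minimal sketch; all names (`EditEventHub`, `MovedEvent`, `demoGuiReposition`) are invented for illustration and are not actual Lumiera API:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch: the session core publishes "object moved" events
// after an edit; the GUI layer subscribes and re-positions its widgets
// instead of assuming objects stay where they were placed.
struct MovedEvent
{
    std::string objectID;   // which high-level model object moved
    long        newStart;   // new start position (illustrative time unit)
};

class EditEventHub
{
    std::vector<std::function<void(MovedEvent const&)>> listeners_;
public:
    void subscribe (std::function<void(MovedEvent const&)> fun)
    {
        listeners_.push_back (std::move (fun));
    }
    void notifyMoved (MovedEvent const& ev)
    {
        for (auto& fun : listeners_)
            fun (ev);    // fan the event out to every registered listener
    }
};

// demo: a stand-in "widget" position gets updated purely via notification
inline long demoGuiReposition()
{
    EditEventHub hub;
    long widgetPosition = 0;
    hub.subscribe ([&](MovedEvent const& ev)
                      { widgetPosition = ev.newStart; });
    hub.notifyMoved (MovedEvent{"clip-1", 2500});  // edit ripples the clip
    return widgetPosition;
}
```

With such a hookup the GUI never assumes an object's position is stable; it simply re-renders from whatever the notification reports.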
diff --git a/doc/devel/rfc/ProcPlacementMetaphor.txt b/doc/devel/rfc/ProcPlacementMetaphor.txt index 3f6ab4d4a..c89f5faa4 100644 --- a/doc/devel/rfc/ProcPlacementMetaphor.txt +++ b/doc/devel/rfc/ProcPlacementMetaphor.txt @@ -151,7 +151,7 @@ Use the conventional approach * layering and pan are hard wired additional properties * implement an additional auto-link macro facility to attach sound to video * implement a magnetic snap-to for attaching clips seamless after each other - * implement a splicing/sliding/shuffling mode in the gui + * implement a splicing/sliding/shuffling mode in the UI * provide a output wiring tool in the GUI * provide macro features for this and that.... diff --git a/doc/devel/rfc/SchedulerRequirements.txt b/doc/devel/rfc/SchedulerRequirements.txt index 3f12c91c4..dd729ff13 100644 --- a/doc/devel/rfc/SchedulerRequirements.txt +++ b/doc/devel/rfc/SchedulerRequirements.txt @@ -151,7 +151,7 @@ Alternatives We do not want (1), since it is tied to an obsolete hardware model and lacks the ability to be adapted to the new kinds of hardware available today or to be expected in near future. We do not want (2) since it essentially doesn't solve any problem, but rather pushes complexity into the -higher layers (Session, GUI), which are lacking the information about individual jobs and timing. +higher layers (Session, Stage), which are lacking the information about individual jobs and timing. diff --git a/doc/devel/rfc/ThreadsSignalsAndImportantManagementTasks.txt b/doc/devel/rfc/ThreadsSignalsAndImportantManagementTasks.txt index a6f32e9e1..a7f258622 100644 --- a/doc/devel/rfc/ThreadsSignalsAndImportantManagementTasks.txt +++ b/doc/devel/rfc/ThreadsSignalsAndImportantManagementTasks.txt @@ -62,7 +62,7 @@ comments about possible signals. SIGINT:: This is the CTRL-C case from terminal, in most cases this means that a user wants to break the application immediately. We trigger an - emergency shutdown. 
Recents actions are be logged already, so no work + emergency shutdown. Recent actions are logged already, so no work gets lost, but no checkpoint in the log gets created so one has to explicitly recover the interrupted state. diff --git a/doc/index.txt b/doc/index.txt index 09374f06d..53421ae24 100644 --- a/doc/index.txt +++ b/doc/index.txt @@ -31,7 +31,7 @@ used as design notebook, featuring day-to-day design sketches, notes but also quite some more persistent planning. Finished documentation text is constantly moved over to the documentation section(s) of the Lumiera website. --> access the Proc-Layer link:{l}/wiki/renderengine.html[TiddlyWiki online here] +-> access the Development link:{l}/wiki/renderengine.html[TiddlyWiki online here] === API Documentation === We use the link:http://doxygen.org[Doxygen] tool to extract diff --git a/doc/technical/build/SCons.txt b/doc/technical/build/SCons.txt index 220cfa373..1ce955f35 100644 --- a/doc/technical/build/SCons.txt +++ b/doc/technical/build/SCons.txt @@ -100,9 +100,9 @@ library `liblumieracommon.so`, as is the collection of helper classes and suppor our 'support library' `liblumierasupport.so`. Besides, there is a sub-tree for core plug-ins and helper tools. .the GTK Gui -one of the sub-trees, residing in `src/gui` forms the _upper layer_ or _user-interaction layer_. Contrary to -the lower layers, the GUI is _optional_ and the application is fully operational _without Gui._ Thus, the -GTK Gui is built and loaded as Lumiera a plug-in. +one of the sub-trees, residing in `src/stage` forms the _upper layer_ or _user-interaction layer_. Contrary to +the lower layers, the Stage Layer (GUI) is _optional_ and the application is fully operational _without GUI._ +Thus, the GTK GUI is built and loaded as a Lumiera plug-in. .unit tests Since our development is test-driven, about half of the overall code can be found in unit- and integration
The rationale being to of the core application when it comes to experimenting and trying out new technologies. .icons and resources -the +data/+ subtree holds resources, configuration files and icons for the Gui. Most of our icons +the +data/+ subtree holds resources, configuration files and icons for the GUI. Most of our icons are defined as SVG graphics. The build process creates a helper executable (+rsvg_convert+) to render these vector graphics with the help of lib Cairo into icon collections of various sizes. @@ -154,7 +154,7 @@ prints a summary of all custom options, targets and toggles defined for our buil Targets ^^^^^^^ -- *build* is the default target: it creates the shared libs, the application, core plug-ins and the Gui. +- *build* is the default target: it creates the shared libs, the application, core plug-ins and the GUI. - *testcode* additionally builds the research and unit test code - *check* builds test code and runs our test-suites - *research* builds just the research tree diff --git a/doc/technical/code/linkingStructure.txt b/doc/technical/code/linkingStructure.txt index ac42bcc53..f9bd8355a 100644 --- a/doc/technical/code/linkingStructure.txt +++ b/doc/technical/code/linkingStructure.txt @@ -96,7 +96,7 @@ some system facilities, in case the need arises. It is desirable for headers to in a way independent of the include order. But in some, rare cases we need to rely on a specific order of include. In such cases, it is a good idea to encode this specific order right into some very fundamental header, so it gets fixed and settled early in the include -processing chain. Our 'gui/gtk-base.hpp', as used by 'gui/gtk-lumiera.hpp' is a good example. +processing chain. Our 'stage/gtk-base.hpp', as used by 'stage/gtk-lumiera.hpp' is a good example. Forward declarations ^^^^^^^^^^^^^^^^^^^^ @@ -351,7 +351,7 @@ actively step by step Gui.resourcepath:: the place where the GTK-UI looks for further resources, most notably... 
Gui.stylesheet:: the name of the CSS-stylesheet for GTK-3, which defines the - application specific look, link:{ldoc}/technical/gui/guiTheme.html[skinning and theme]. + application specific look, link:{ldoc}/technical/stage/guiTheme.html[skinning and theme]. While the first two steps, the relative locations `$ORIGIN/modules` and `$ORIGIN/setup.ini` are hard-wired, the further resolution steps rely on the contents of 'setup.ini' and are diff --git a/doc/technical/howto/IdeSetup.txt b/doc/technical/howto/IdeSetup.txt index 7017ad04d..9d52c4739 100644 --- a/doc/technical/howto/IdeSetup.txt +++ b/doc/technical/howto/IdeSetup.txt @@ -39,7 +39,7 @@ was able to see the right files with the right locations * visit the _Indexer_ tab and ensure the full indexer is enabled. Maybe change a setting and hit ``apply'' to force re-building of the index. Depending on your computer, this indexing might take quite some time initially * if the indexing process was onece interrupted by a crash or force shutdown of the IDE, the index database might - be corrupted. Try to remove it, either through the Gui, or try to locate the raw data and kill it while Eclipse + be corrupted. Try to remove it, either through the GUI, or try to locate the raw data and kill it while Eclipse is not running. + It is located in `/.metadata/.plugins/org.eclipse.cdt.core/`... 
* at some point in the past, I had problems with a lacking definiton of our own library diff --git a/doc/technical/infra/TestSupport.txt b/doc/technical/infra/TestSupport.txt index 6f06d758f..987d40262 100644 --- a/doc/technical/infra/TestSupport.txt +++ b/doc/technical/infra/TestSupport.txt @@ -241,7 +241,7 @@ The currently employed numbering scheme is as follows |20 |Higher level support library services |30 |Vault Layer Unit tests |40 |Steam Layer Unit tests -|50 |Stage Layer Unit tests (Gui, Scripting) +|50 |Stage Layer Unit tests (UI binding, Scripting) |60 |Component integration tests |70 |Functionality tests on the complete application |80 |Reported bugs which can be expressed in a test case diff --git a/doc/technical/overview.txt b/doc/technical/overview.txt index 1ca4bfadd..be7d362a6 100644 --- a/doc/technical/overview.txt +++ b/doc/technical/overview.txt @@ -138,7 +138,7 @@ they won't contain any relevant persistent state beyond presentation. _As of 2018, the one and only interface under active development is the Lumiera GTK GUI,_ based on GTK-3 / gtkmm. The sources are in tree -(directory 'src/gui') and it is integrated into the standard build and +(directory 'src/stage') and it is integrated into the standard build and installation process. By default, running the 'lumiera' executable will load and start this GUI as a Lumiera module from 'modules/gtk_gui.lum' @@ -513,7 +513,7 @@ and disjunction. Locking ~~~~~~~ General purpose Locking is based on object monitors. Performance critical code -in the backend uses mutexes, condition vars and rwlocks direcly. +in the Vault uses mutexes, condition vars and rwlocks directly. Intentionally no semaphores.
- C++ locks are managed by scoped automatic variables diff --git a/doc/technical/stage/CodePolicy.txt b/doc/technical/stage/CodePolicy.txt index b3fdc1371..ec68e1e2c 100644 --- a/doc/technical/stage/CodePolicy.txt +++ b/doc/technical/stage/CodePolicy.txt @@ -88,7 +88,7 @@ NOTE: `using namespace Gtk` and similar wildcard includes are prohibited. We are forced to observe a tricky include sequence, due to NoBug's `ERROR` macro, and also in order to get I18N right. Thus any actual translation unit should ensure that effectively the first include -is that of 'gui/gtk-base.hpp'.footnote:[you need not include it literally. It is +is that of 'stage/gtk-base.hpp'.footnote:[you need not include it literally. It is perfectly fine if you can be sure the first included header somehow drags in 'gtk-base.hpp' before any other header.] diff --git a/doc/technical/stage/GuiConnect.txt b/doc/technical/stage/GuiConnect.txt index c7a9d6221..cd6b189fc 100644 --- a/doc/technical/stage/GuiConnect.txt +++ b/doc/technical/stage/GuiConnect.txt @@ -1,7 +1,7 @@ GUI Connection ============== -The Lumiera application is built from several segregated software layers, and the user interface, the GUI-Layer is +The Lumiera application is built from several segregated software layers, and the user interface, the Stage-Layer, is optional and loaded as a plug-in. This deliberate design decision has far reaching consequences in all parts of the application: Anything of relevance for editing or rendering need to be _represented explicitly_ within a model, distinct and abstracted from the presentation for the user. For example, many media handling applications let you ``snap'' some
`Gtk::TextView` -> `textv Widgets may also expose CSS classes for styling -- the standard widgets define a generic set of https://developer.gnome.org/gtk3/3.4/GtkStyleContext.html#gtkstylecontext-classes[predefined CSS style classes], which can be used to establish the foundation for theming. Obviously it is preferable to keep styling rules as -concise, generic and systematic as possible; yet we may still refer to individual gui elements by name (`#ID`) though. +concise, generic and systematic as possible; yet we may still refer to individual GUI elements by name (`#ID`) though. Recommended reading ~~~~~~~~~~~~~~~~~~~ diff --git a/doc/technical/stage/index.txt b/doc/technical/stage/index.txt index e51ed8ee1..ad34a0085 100644 --- a/doc/technical/stage/index.txt +++ b/doc/technical/stage/index.txt @@ -1,5 +1,5 @@ -Technical Documentation: GUI -============================ +Technical Documentation: Stage Layer +==================================== Eventually, this will have technical documentation for the GUI. diff --git a/doc/technical/vault/ConfigGuide.txt b/doc/technical/vault/ConfigGuide.txt index b4b06e0d8..3e66ac9b6 100644 --- a/doc/technical/vault/ConfigGuide.txt +++ b/doc/technical/vault/ConfigGuide.txt @@ -95,7 +95,7 @@ File handling How many filehandles the backend shall use [approx 2/3 of all available] - backend.file.max_handles + vault.file.max_handles @@ -107,15 +107,15 @@ Defaults: 3GiB on 32 bit arch 192TiB on 64 bit arch - backend.mmap.as_limit + vault.mmap.as_limit Default start size for mmaping windows. 
128MB on 32 bit arch 2GB on 64 bit arch - backend.mmap.window_size + vault.mmap.window_size How many memory mappings shall be established at most Default 60000 - backend.mmap.max_maps + vault.mmap.max_maps diff --git a/doc/user/intro/Glossary.txt b/doc/user/intro/Glossary.txt index 1a971e52b..7aaf4a9d2 100644 --- a/doc/user/intro/Glossary.txt +++ b/doc/user/intro/Glossary.txt @@ -34,9 +34,9 @@ NOTE: Draft, please help rephrase/review and shorten explanations! anchor:ControllerGui[] link:#ControllerGui[->]Controller Gui:: This can be either a full Software implementation for a Transport - control (Widgets for Start/Stop/Rev/Ffw etc) or some Gui managing an + control (Widgets for Start/Stop/Rev/Ffw etc) or some GUI managing an Input Device. They share some feature to attach them to controllable - gui-entities (Viewers, Timeline Views) + GUI-entities (Viewers, Timeline Views) anchor:Cursor[] link:#Cursor[->]Cursor:: Playback- or edit position diff --git a/doc/user/tutorials/building.txt b/doc/user/tutorials/building.txt index f450f081e..e09f2dcd7 100644 --- a/doc/user/tutorials/building.txt +++ b/doc/user/tutorials/building.txt @@ -5,7 +5,7 @@ Building Lumiera from source At the moment you can build Lumiera, start the standard Lumiera GUI and run the Lumiera test suite. The GUI you start might look like a mockup, but in fact is -_is_ the real application; just there isn't any wiring between GUI, model and +_is_ the real application; it's just that there isn't much wiring between GUI, model and core yet. This will remain the state of affairs for the foreseeable future, since we're developing core components against a test suite with unit and integration tests.
Thus, the growth of our test suite is the only visible indication of diff --git a/doc/user/tutorials/contributing.txt b/doc/user/tutorials/contributing.txt index a5f684180..75d7bbb09 100644 --- a/doc/user/tutorials/contributing.txt +++ b/doc/user/tutorials/contributing.txt @@ -245,7 +245,7 @@ fact, many user interfaces should be possible. * we need a concept for pen based handling -Proc Layer:: +Steam Layer:: - external connection systems * investigate good ways to output video, both in-window and full screen. @@ -280,9 +280,9 @@ Proc Layer:: ///////////////// -Backend:: +Vault:: -TODO: where to contribute to the backend?? +TODO: where to contribute to the backend / the vault?? ///////////////// diff --git a/src/common/config.h b/src/common/config.h index 82b6025ee..6c0ffbea1 100644 --- a/src/common/config.h +++ b/src/common/config.h @@ -36,7 +36,7 @@ ** @todo as of 2016, this code is not in any meaningful use ** ** @see lumiera::BasicSetup simple start-up configuration - ** @see http://lumiera.org/documentation/technical/backend/ConfigLoader.html ConfigLoader draft from 2008 + ** @see http://lumiera.org/documentation/technical/vault/ConfigLoader.html ConfigLoader draft from 2008 */ #ifndef COMMON_CONFIG_H diff --git a/src/include/gui-notification-facade.h b/src/include/gui-notification-facade.h index f58a81af1..ac095b6b3 100644 --- a/src/include/gui-notification-facade.h +++ b/src/include/gui-notification-facade.h @@ -27,7 +27,7 @@ ** changes in the session, which result in notification and structure change messages ** being pushed up asynchronously back into the UI. The GuiNotification interface ** abstracts this ability of the UI to receive such update messages. It is implemented - ** by the NotificationService within the GUI Layer, which causes actual tangible changes + ** by the NotificationService within the Stage-Layer, which causes actual tangible changes ** to happen in the UI in response to the reception of these messages. 
** ** @see notification-service.hpp implementation diff --git a/src/include/logging.h b/src/include/logging.h index a83f82406..816007220 100644 --- a/src/include/logging.h +++ b/src/include/logging.h @@ -30,7 +30,7 @@ ** declarations are to be kept in one central location. Subsystems ** are free to define and use additional flags for local use. Typically, ** this header will be included via some of the basic headers like error.hpp, - ** which in turn gets included e.g. by proc/common.hpp + ** which in turn gets included e.g. by steam/common.hpp ** ** This header can thus be assumed to be effectively global. It should contain ** only declarations of global relevance, as any change causes the whole project @@ -124,27 +124,27 @@ NOBUG_CPP_DEFINE_FLAG_PARENT ( progress, logging); /** progress log for the main starter */ NOBUG_CPP_DEFINE_FLAG_PARENT ( main, progress); /** progress log for the vault layer */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( backend, progress); -NOBUG_CPP_DEFINE_FLAG_PARENT ( file, backend); //opening/closing files etc -NOBUG_CPP_DEFINE_FLAG_PARENT ( mmap, backend); //mmap errors -NOBUG_CPP_DEFINE_FLAG_PARENT ( thread, backend); //starting/stopping threads +NOBUG_CPP_DEFINE_FLAG_PARENT ( vault, progress); +NOBUG_CPP_DEFINE_FLAG_PARENT ( file, vault); //opening/closing files etc +NOBUG_CPP_DEFINE_FLAG_PARENT ( mmap, vault); //mmap errors +NOBUG_CPP_DEFINE_FLAG_PARENT ( thread, vault); //starting/stopping threads NOBUG_CPP_DEFINE_FLAG_PARENT ( threads, thread); NOBUG_CPP_DEFINE_FLAG_PARENT ( threadpool, thread); -NOBUG_CPP_DEFINE_FLAG_PARENT ( fileheader, backend); +NOBUG_CPP_DEFINE_FLAG_PARENT ( fileheader, vault); /** progress log for the steam layer */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( proc, progress); +NOBUG_CPP_DEFINE_FLAG_PARENT ( steam, progress); /** progress log for steam-layer command dispatch */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( command, proc); +NOBUG_CPP_DEFINE_FLAG_PARENT ( command, steam); /** progress log for session datastructure */ 
-NOBUG_CPP_DEFINE_FLAG_PARENT ( session, proc); +NOBUG_CPP_DEFINE_FLAG_PARENT ( session, steam); /** progress log for the builder and build process */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( builder, proc); +NOBUG_CPP_DEFINE_FLAG_PARENT ( builder, steam); /** progress log for running the engine */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( engine, proc); +NOBUG_CPP_DEFINE_FLAG_PARENT ( engine, steam); /** progress log for play- and render subsystem */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( play, proc); -/** progress log for the gui */ -NOBUG_CPP_DEFINE_FLAG_PARENT ( gui, progress); +NOBUG_CPP_DEFINE_FLAG_PARENT ( play, steam); +/** progress log for the stage layer (GUI) */ +NOBUG_CPP_DEFINE_FLAG_PARENT ( stage, progress); /** progress log for the support lib */ NOBUG_CPP_DEFINE_FLAG_PARENT ( library, progress); NOBUG_CPP_DEFINE_FLAG_PARENT ( resourcecollector, library); diff --git a/src/lib/time/digxel.hpp b/src/lib/time/digxel.hpp index 4fb3b645b..2ef8f214c 100644 --- a/src/lib/time/digxel.hpp +++ b/src/lib/time/digxel.hpp @@ -32,7 +32,7 @@ ** properties to support building such display widgets. It doesn't contain any GUI code, but ** can be wrapped up to build a custom widget. ** - ** \par properties of a "Digxel" + ** # properties of a "Digxel" ** ** Semantically, it's a number or number component. It holds an internal numeric representation ** and is implicitly convertible back to the underlying numeric type (usually int or double). @@ -53,7 +53,8 @@ ** this mutation functor should invoke some internal recalculations, maybe resulting in a new ** value being pushed to the Digxel for display. 
** - ** \par configuration + ** # configuration + ** ** the Digxel template can be configured to some degree to adjust the stored numeric data ** and the actual format to be applied ** diff --git a/src/lib/time/quantiser.hpp b/src/lib/time/quantiser.hpp index 194986107..fc0464f89 100644 --- a/src/lib/time/quantiser.hpp +++ b/src/lib/time/quantiser.hpp @@ -136,7 +136,7 @@ namespace time { * Simple stand-alone Quantiser implementation based on a constant sized gird. * This is a self-contained quantiser implementation without any implicit referral * to the Lumiera session. As such it is suited for simplified unit testing. - * @warning real GUI and Steam-Layer code should always fetch a quantiser from the + * @warning real Stage and Steam-Layer code should always fetch a quantiser from the * Session, referring to a pre defined TimeGrid. Basically, the overall purpose of * the time-quantisation framework is to enforce such a link to a distinct time scale * and quantisation, so to prevent "wild and uncoordinated" rounding attempts. diff --git a/src/stage/ctrl/actions.hpp b/src/stage/ctrl/actions.hpp index 55719710c..21c882c6e 100644 --- a/src/stage/ctrl/actions.hpp +++ b/src/stage/ctrl/actions.hpp @@ -226,7 +226,7 @@ namespace ctrl { } catch(Glib::Error& ex) { - ERROR (gui, "Building menus failed: %s", ex.what().data()); + ERROR (stage, "Building menus failed: %s", ex.what().data()); throw error::Config(_Fmt("global menu definition rejected: %s") % ex.what()); } @@ -354,7 +354,7 @@ namespace ctrl { void unimplemented (const char* todo) { - WARN (gui, "%s is not yet implemented. So sorry.", todo); + WARN (stage, "%s is not yet implemented. 
So sorry.", todo); } diff --git a/src/stage/ctrl/core-service.hpp b/src/stage/ctrl/core-service.hpp index 1e8c056a3..3153543e4 100644 --- a/src/stage/ctrl/core-service.hpp +++ b/src/stage/ctrl/core-service.hpp @@ -141,7 +141,7 @@ namespace ctrl{ , uiBusBackbone_{*this} , stateRecorder_{*this} { - INFO (gui, "UI-Backbone operative."); + INFO (stage, "UI-Backbone operative."); } ~CoreService(); diff --git a/src/stage/ctrl/facade.hpp b/src/stage/ctrl/facade.hpp index 20682907b..fd87bf104 100644 --- a/src/stage/ctrl/facade.hpp +++ b/src/stage/ctrl/facade.hpp @@ -78,7 +78,7 @@ namespace ctrl { : notificationService_{bus.getAccessPoint(), manager} // opens the GuiNotificationService instance , displayService_{} // opens the DisplayService instance ////////TICKET #82 obsolete { - INFO (gui, "UI-Facade Interfaces activated."); + INFO (stage, "UI-Facade Interfaces activated."); } private: diff --git a/src/stage/ctrl/nexus.hpp b/src/stage/ctrl/nexus.hpp index 5698bd931..78df18ae5 100644 --- a/src/stage/ctrl/nexus.hpp +++ b/src/stage/ctrl/nexus.hpp @@ -186,7 +186,7 @@ namespace ctrl{ ~Nexus() { if (0 < size()) - ERROR (gui, "Some UI components are still connected to the backbone."); + ERROR (stage, "Some UI components are still connected to the backbone."); } }; diff --git a/src/stage/ctrl/playback-controller.cpp b/src/stage/ctrl/playback-controller.cpp index 6790137e3..d2e568399 100644 --- a/src/stage/ctrl/playback-controller.cpp +++ b/src/stage/ctrl/playback-controller.cpp @@ -83,7 +83,7 @@ namespace ctrl { } catch (lumiera::error::State& err) { - WARN (gui, "failed to start playback: %s" ,err.what()); + WARN (stage, "failed to start playback: %s" ,err.what()); lumiera_error(); playing_ = false; } diff --git a/src/stage/ctrl/ui-dispatcher.hpp b/src/stage/ctrl/ui-dispatcher.hpp index ad6cf3878..888f07bb2 100644 --- a/src/stage/ctrl/ui-dispatcher.hpp +++ b/src/stage/ctrl/ui-dispatcher.hpp @@ -96,7 +96,7 @@ namespace ctrl { { static _Fmt messageTemplate{"asynchronous UI 
response failed: %s (error flag was: %s)"}; string response{messageTemplate % problem % lumiera_error()}; - WARN (gui, "%s", response.c_str()); + WARN (stage, "%s", response.c_str()); return response; } } diff --git a/src/stage/dialog/render.cpp b/src/stage/dialog/render.cpp index ee3102e2b..1072f15d0 100644 --- a/src/stage/dialog/render.cpp +++ b/src/stage/dialog/render.cpp @@ -22,7 +22,7 @@ /** @file render.cpp - ** Implementation of gui:dialog::Render, which is a Dialog + ** Implementation of stage:dialog::Render, which is a Dialog ** to set up a render process and define output name and format. */ @@ -102,9 +102,9 @@ namespace dialog { dialog.add_button (Gtk::Stock::SAVE, Gtk::RESPONSE_OK); int result = dialog.run(); - INFO (gui, "%d", result); + INFO (stage, "%d", result); if (result == RESPONSE_OK) - INFO(gui, "%s", "RESPONSE_OK"); + INFO(stage, "%s", "RESPONSE_OK"); } diff --git a/src/stage/gtk-lumiera.cpp b/src/stage/gtk-lumiera.cpp index 615337e5b..5bfeb1c37 100644 --- a/src/stage/gtk-lumiera.cpp +++ b/src/stage/gtk-lumiera.cpp @@ -81,7 +81,7 @@ namespace stage { /**************************************************************************//** - * Implement the necessary steps for actually making the Lumiera Gui available. + * Implement the necessary steps for actually making the Lumiera UI available. * Establish the UI backbone services and start up the GTK GUI main event loop. * @warning to ensure reliable invocation of the termination signal, * any members should be failsafe on initialisation @@ -152,7 +152,7 @@ namespace stage { catch(...) 
{ if (!lumiera_error_peek()) - LUMIERA_ERROR_SET (gui, STATE, "unexpected error when starting the GUI thread"); + LUMIERA_ERROR_SET (stage, STATE, "unexpected error when starting the GUI thread"); return false; } // note: lumiera_error state remains set } diff --git a/src/stage/interact/interaction-director.cpp b/src/stage/interact/interaction-director.cpp index ca5b95d99..9259a2946 100644 --- a/src/stage/interact/interaction-director.cpp +++ b/src/stage/interact/interaction-director.cpp @@ -126,7 +126,7 @@ namespace interact { inline void unimplemented (const char* todo) { - WARN (gui, "%s is not yet implemented. So sorry.", todo); + WARN (stage, "%s is not yet implemented. So sorry.", todo); } } diff --git a/src/stage/model/clip.cpp b/src/stage/model/clip.cpp index c2192aba7..7c1c0c9de 100644 --- a/src/stage/model/clip.cpp +++ b/src/stage/model/clip.cpp @@ -21,7 +21,7 @@ * *****************************************************/ -/** @file gui/model/clip.cpp +/** @file stage/model/clip.cpp ** Preliminary UI-model: implementation of a Clip object as placeholder to ** base the GUI implementation on. ** @warning as of 2016 this UI model is known to be a temporary workaround diff --git a/src/stage/model/clip.hpp b/src/stage/model/clip.hpp index c6638eef5..9dc897b5e 100644 --- a/src/stage/model/clip.hpp +++ b/src/stage/model/clip.hpp @@ -20,7 +20,7 @@ */ -/** @file gui/model/clip.hpp +/** @file stage/model/clip.hpp ** Preliminary UI-model: a Proxy Clip object to base the GUI implementation on. ** Later this Clip object will be connected to the underlying model in Steam-Layer. ** @warning as of 2016 this UI model is known to be a temporary workaround diff --git a/src/stage/model/diagnostics.hpp b/src/stage/model/diagnostics.hpp index 9a97f9fa7..4ce6c5804 100644 --- a/src/stage/model/diagnostics.hpp +++ b/src/stage/model/diagnostics.hpp @@ -21,7 +21,7 @@ */ -/** @file gui/model/diagnostics.hpp +/** @file stage/model/diagnostics.hpp ** Service for diagnostics. 
** This header defines the basics of... ** diff --git a/src/stage/model/group-track.hpp b/src/stage/model/group-track.hpp index 9ee2d5913..b9fba1d47 100644 --- a/src/stage/model/group-track.hpp +++ b/src/stage/model/group-track.hpp @@ -20,7 +20,7 @@ */ -/** @file gui/model/group-track.hpp +/** @file stage/model/group-track.hpp ** Preliminary UI-model: Definition of group track timeline objects. ** @warning as of 2016 this UI model is known to be a temporary workaround ** and will be replaced in entirety by UI-Bus and diff framework. diff --git a/src/stage/model/sequence.cpp b/src/stage/model/sequence.cpp index 57acbca87..dbdfa6782 100644 --- a/src/stage/model/sequence.cpp +++ b/src/stage/model/sequence.cpp @@ -21,7 +21,7 @@ * *****************************************************/ -/** @file gui/model/sequence.cpp +/** @file stage/model/sequence.cpp ** Preliminary UI-model: implementation of an editable sequence. ** @warning as of 2016 this UI model is known to be a temporary workaround ** and will be replaced in entirety by UI-Bus and diff framework. @@ -74,7 +74,7 @@ Sequence::populateDummySequence() // END TEST CODE - INFO(gui, "\n%s", print_branch().c_str()); + INFO(stage, "\n%s", print_branch().c_str()); } diff --git a/src/stage/model/sequence.hpp b/src/stage/model/sequence.hpp index 077cedc9d..2a7a483fa 100644 --- a/src/stage/model/sequence.hpp +++ b/src/stage/model/sequence.hpp @@ -20,7 +20,7 @@ */ -/** @file gui/model/sequence.hpp +/** @file stage/model/sequence.hpp ** Preliminary UI-model: representation of an editable sequence. ** @warning as of 2016 this UI model is known to be a temporary workaround ** and will be replaced in entirety by UI-Bus and diff framework. 
diff --git a/src/stage/model/w-link.hpp b/src/stage/model/w-link.hpp index 58c1b053c..203ab10a2 100644 --- a/src/stage/model/w-link.hpp +++ b/src/stage/model/w-link.hpp @@ -97,7 +97,7 @@ namespace model { try { this->clear(); } - ERROR_LOG_AND_IGNORE (gui, "Detaching managed WLink from Widget") + ERROR_LOG_AND_IGNORE (stage, "Detaching managed WLink from Widget") } WLink() noexcept : widget_{nullptr} @@ -233,7 +233,7 @@ namespace model { } catch (...) { - ERROR (gui, "Unknown exception while attaching WLink"); + ERROR (stage, "Unknown exception while attaching WLink"); throw error::External (_Fmt{"WLink could not attach to %s due to unidentified Problems"} % target); } } diff --git a/src/stage/notification-service.cpp b/src/stage/notification-service.cpp index 1547eae37..9fe9bc0f2 100644 --- a/src/stage/notification-service.cpp +++ b/src/stage/notification-service.cpp @@ -156,7 +156,7 @@ namespace stage { void NotificationService::triggerGuiShutdown (string const& cause) { - NOTICE (gui, "@GUI: shutdown triggered with explanation '%s'....", cStr(cause)); + NOTICE (stage, "@GUI: shutdown triggered with explanation '%s'....", cStr(cause)); displayInfo (NOTE_ERROR, cause); dispatch_->event ([this]() { @@ -180,7 +180,7 @@ namespace stage { ) , LUMIERA_INTERFACE_INLINE (brief, const char*, (LumieraInterface ifa), - { (void)ifa; return "GUI Interface: push state update and notification of events into the GUI"; } + { (void)ifa; return "Stage Interface: push state update and notification of events into the GUI"; } ) , LUMIERA_INTERFACE_INLINE (homepage, const char*, (LumieraInterface ifa), @@ -321,7 +321,7 @@ namespace stage { , uiManager_{uiManager} , serviceInstance_( LUMIERA_INTERFACE_REF (lumieraorg_GuiNotification, 0,lumieraorg_GuiNotificationService)) { - INFO (gui, "GuiNotification Facade opened."); + INFO (stage, "GuiNotification Facade opened."); } diff --git a/src/stage/output/xvdisplayer.cpp b/src/stage/output/xvdisplayer.cpp index 224f9e1e6..a96328be3 100644 
--- a/src/stage/output/xvdisplayer.cpp +++ b/src/stage/output/xvdisplayer.cpp @@ -52,7 +52,7 @@ namespace output { REQUIRE(width > 0); REQUIRE(height > 0); - INFO(gui, "Trying XVideo at %d x %d", width, height); + INFO(stage, "Trying XVideo at %d x %d", width, height); imageWidth = width; imageHeight = height; @@ -69,11 +69,11 @@ namespace output { if (XvQueryAdaptors (display, window, &count, &adaptorInfo) == Success) { - INFO(gui, "XvQueryAdaptors count: %d", count); + INFO(stage, "XvQueryAdaptors count: %d", count); for (unsigned int n = 0; gotPort == false && n < count; ++n ) { // Diagnostics - INFO(gui, "%s, %lu, %lu", adaptorInfo[ n ].name, + INFO(stage, "%s, %lu, %lu", adaptorInfo[ n ].name, adaptorInfo[ n ].base_id, adaptorInfo[ n ].num_ports - 1); for ( unsigned int port = adaptorInfo[ n ].base_id; @@ -87,11 +87,11 @@ namespace output { list = XvListImageFormats( display, port, &formats ); - INFO(gui, "formats supported: %d", formats); + INFO(stage, "formats supported: %d", formats); for ( int i = 0; i < formats; i ++ ) { - INFO(gui, "0x%x (%c%c%c%c) %s", + INFO(stage, "0x%x (%c%c%c%c) %s", list[ i ].id, ( list[ i ].id ) & 0xff, ( list[ i ].id >> 8 ) & 0xff, @@ -124,11 +124,11 @@ namespace output { XvQueryEncodings( display, grabbedPort, &unum, &enc ); for ( unsigned int index = 0; index < unum; index ++ ) { - INFO (gui, "%d: %s, %ldx%ld rate = %d/%d", - index, enc->name, - enc->width, enc->height, - enc->rate.numerator, - enc->rate.denominator); + INFO (stage, "%d: %s, %ldx%ld rate = %d/%d" + , index, enc->name + , enc->width, enc->height + , enc->rate.numerator + , enc->rate.denominator); } XvAttribute *xvattr = XvQueryPortAttributes (display, grabbedPort, &num); @@ -140,13 +140,13 @@ namespace output { { Atom val_atom = XInternAtom( display, xvattr[k].name, False ); if (XvSetPortAttribute(display, grabbedPort, val_atom, 1 ) != Success ) - NOBUG_ERROR(gui, "Couldn't set Xv attribute %s\n", xvattr[k].name); + NOBUG_ERROR(stage, "Couldn't set Xv attribute 
%s\n", xvattr[k].name); } else if ( strcmp( xvattr[k].name, "XV_COLORKEY") == 0 ) { Atom val_atom = XInternAtom( display, xvattr[k].name, False ); if ( XvSetPortAttribute( display, grabbedPort, val_atom, 0x010102 ) != Success ) - NOBUG_ERROR(gui, "Couldn't set Xv attribute %s\n", xvattr[k].name); + NOBUG_ERROR(stage, "Couldn't set Xv attribute %s\n", xvattr[k].name); } } } @@ -190,7 +190,7 @@ namespace output { XvDisplayer::~XvDisplayer() { - NOBUG_ERROR(gui, "Destroying XV Displayer"); + NOBUG_ERROR(stage, "Destroying XV Displayer"); if ( gotPort ) { diff --git a/src/stage/ui-bus.cpp b/src/stage/ui-bus.cpp index c1136b4f1..84ecf04ca 100644 --- a/src/stage/ui-bus.cpp +++ b/src/stage/ui-bus.cpp @@ -128,7 +128,7 @@ namespace ctrl { * the UI. Here, at the UI-Bus interface, we're just interested * in the fact _that_ some command is to be bound and invoked. * This information is forwarded to the command receiver service, - * which in turn talks to the proc dispatcher. + * which in turn talks to the steam dispatcher. * @note no information regarding the _origin_ of this command invocation * is captured. If a command needs a _subject_, this has to be * bound as a command argument beforehand. 
diff --git a/src/stage/widget/timeline/timeline-body.cpp b/src/stage/widget/timeline/timeline-body.cpp index e9194f1b5..b01d452f8 100644 --- a/src/stage/widget/timeline/timeline-body.cpp +++ b/src/stage/widget/timeline/timeline-body.cpp @@ -75,7 +75,7 @@ TimelineBody::TimelineBody (TimelineWidget &timelineWidget) TimelineBody::~TimelineBody() { - WARN_IF(!tool, gui, "An invalid tool pointer is unexpected here"); + WARN_IF(!tool, stage, "An invalid tool pointer is unexpected here"); } TimelineViewWindow& diff --git a/src/stage/widget/timeline/timeline-track.cpp b/src/stage/widget/timeline/timeline-track.cpp index 6aab8e8da..582f2dfd4 100644 --- a/src/stage/widget/timeline/timeline-track.cpp +++ b/src/stage/widget/timeline/timeline-track.cpp @@ -244,7 +244,7 @@ namespace timeline { return EXPANDER_SEMI_COLLAPSED; } - NOBUG_ERROR(gui, "Track::get_expander_style() final return reached"); + NOBUG_ERROR(stage, "Track::get_expander_style() final return reached"); return EXPANDER_COLLAPSED; // This should never happen } diff --git a/src/stage/workspace/dock-area.cpp b/src/stage/workspace/dock-area.cpp index 9efd5ab44..a78d0fe92 100644 --- a/src/stage/workspace/dock-area.cpp +++ b/src/stage/workspace/dock-area.cpp @@ -222,7 +222,7 @@ namespace workspace { break; default: - ERROR(gui, "Unknown split_direction: %d", split_direction); + ERROR(stage, "Unknown split_direction: %d", split_direction); return; break; } @@ -281,7 +281,7 @@ namespace workspace { return i; } - ERROR (gui, "Unable to find a description with class name %s", class_name); + ERROR (stage, "Unable to find a description with class name %s", class_name); return -1; } @@ -342,7 +342,7 @@ namespace workspace { return i; } - ERROR(gui, "Unable to find a description with with this class type"); + ERROR(stage, "Unable to find a description with this class type"); return -1; } diff --git a/src/stage/workspace/panel-manager.cpp b/src/stage/workspace/panel-manager.cpp index 983c3f22b..92593695d 100644 ---
a/src/stage/workspace/panel-manager.cpp +++ b/src/stage/workspace/panel-manager.cpp @@ -214,7 +214,7 @@ namespace workspace { break; default: - ERROR(gui, "Unknown split_direction: %d", split_direction); + ERROR (stage, "Unknown split_direction: %d", split_direction); return; break; } @@ -273,7 +273,7 @@ namespace workspace { return i; } - ERROR (gui, "Unable to find a description with class name %s", class_name); + ERROR (stage, "Unable to find a description with class name %s", class_name); return -1; } @@ -334,7 +334,7 @@ namespace workspace { return i; } - ERROR(gui, "Unable to find a description with with this class type"); + ERROR (stage, "Unable to find a description with this class type"); return -1; } diff --git a/src/stage/workspace/ui-style.cpp b/src/stage/workspace/ui-style.cpp index 5212261b6..6453bfd08 100644 --- a/src/stage/workspace/ui-style.cpp +++ b/src/stage/workspace/ui-style.cpp @@ -92,7 +92,7 @@ namespace workspace { } catch(Glib::Error const& failure) { - WARN(gui, "Failure while loading stylesheet '%s': %s", cStr(stylesheetName), cStr(failure.what())); + WARN (stage, "Failure while loading stylesheet '%s': %s", cStr(stylesheetName), cStr(failure.what())); } Gtk::StyleContext::add_provider_for_screen (screen, css_provider, @@ -122,7 +122,7 @@ namespace workspace { } else { - WARN(gui, "%s style value failed to load", property_name); + WARN (stage, "%s style value failed to load", property_name); pattern = Cairo::SolidPattern::create_rgb ( red, green, blue ); } @@ -194,7 +194,7 @@ namespace workspace { if(no_icons) { // No icons were loaded - ERROR (gui, "Unable to load icon '%s'", cStr(icon_name)); + ERROR (stage, "Unable to load icon '%s'", cStr(icon_name)); return false; } @@ -297,7 +297,7 @@ namespace workspace { catch(Glib::Exception const& ex) { - WARN (gui, "Failure when accessing icon '%s'. Problem: %s", cStr(path), cStr(ex.what())); + WARN (stage, "Failure when accessing icon '%s'. 
Problem: %s", cStr(path), cStr(ex.what())); return false; } } diff --git a/src/steam/common.hpp b/src/steam/common.hpp index 2c437f027..74ac6fdb8 100644 --- a/src/steam/common.hpp +++ b/src/steam/common.hpp @@ -22,7 +22,7 @@ */ -/** @file proc/common.hpp +/** @file steam/common.hpp ** Basic set of definitions and includes commonly used together. ** Including common.hpp gives you a common set of elementary declarations ** widely used within the C++ code of the Steam-Layer. Besides that, this @@ -126,6 +126,6 @@ namespace steam { } -} // proc +} //(End)namespace steam -#endif /*LUMIERA_H*/ +#endif /*STEAM_COMMON_H*/ diff --git a/src/steam/control/command-instance-manager.hpp b/src/steam/control/command-instance-manager.hpp index 276262cae..4f106060e 100644 --- a/src/steam/control/command-instance-manager.hpp +++ b/src/steam/control/command-instance-manager.hpp @@ -79,7 +79,7 @@ namespace control { /** * Maintain a _current command instance_ for parametrisation. - * The definition of a *Proc-Layer command* is used like a prototype. + * The definition of a *Steam-Layer command* is used like a prototype. * For invocation, an anonymous clone copy is created from the definition * by calling #newInstance. 
Several competing usages of the same command can be * kept apart with the help of the `invocationID`, which is used to decorate the basic diff --git a/src/steam/control/command-invocation.hpp b/src/steam/control/command-invocation.hpp index c250306b9..2ec9e3811 100644 --- a/src/steam/control/command-invocation.hpp +++ b/src/steam/control/command-invocation.hpp @@ -61,7 +61,7 @@ namespace control { - namespace com { ///< Proc-Layer command implementation details + namespace com { ///< Steam-Layer command implementation details // define transient invoker objects, to allow for arbitrary bindings template diff --git a/src/steam/control/session-command-service.cpp b/src/steam/control/session-command-service.cpp index 252132807..a461d3d73 100644 --- a/src/steam/control/session-command-service.cpp +++ b/src/steam/control/session-command-service.cpp @@ -246,7 +246,7 @@ namespace control { , instanceManager_{dispatcher_} , serviceInstance_{ LUMIERA_INTERFACE_REF (lumieraorg_SessionCommand, 0, lumieraorg_SessionCommandService)} { - INFO (gui, "SessionCommand Facade opened."); + INFO (stage, "SessionCommand Facade opened."); } diff --git a/src/steam/control/session-command-service.hpp b/src/steam/control/session-command-service.hpp index ac9661b31..6934fb978 100644 --- a/src/steam/control/session-command-service.hpp +++ b/src/steam/control/session-command-service.hpp @@ -33,7 +33,7 @@ ** used to _provide_ this service, not to access it. ** ** @see session-command-facade.h - ** @see facade.hpp subsystems for the Proc-Layer + ** @see facade.hpp subsystems for the Steam-Layer ** @see guifacade.cpp starting this service */ diff --git a/src/steam/engine/worker/dummy-tick.hpp b/src/steam/engine/worker/dummy-tick.hpp index 1a3c43d62..14910ea6c 100644 --- a/src/steam/engine/worker/dummy-tick.hpp +++ b/src/steam/engine/worker/dummy-tick.hpp @@ -27,7 +27,7 @@ ** drive the frame "creation" of a player dummy (the render engine is not ** ready yet). 
The intention is to use this service as part of a mock engine ** setup, used to verify the construction of engine components. As an integration - ** test, we build a "dummy player", delivering some test data frames to the Gui. + ** test, we build a "dummy player", delivering some test data frames to the GUI. ** ** @see steam::play::DummyPlayerService ** @@ -72,7 +72,7 @@ namespace node { , bind (&DummyTick::timerLoop, this, callback) ) { - INFO (proc, "TickService started."); + INFO (steam, "TickService started."); } ~DummyTick () @@ -81,7 +81,7 @@ namespace node { this->join(); usleep (200000); // additional delay allowing GTK to dispatch the last output - INFO (proc, "TickService shutdown."); + INFO (steam, "TickService shutdown."); } diff --git a/src/steam/engine/worker/tick-service.hpp b/src/steam/engine/worker/tick-service.hpp index 66bfe0096..998635fd1 100644 --- a/src/steam/engine/worker/tick-service.hpp +++ b/src/steam/engine/worker/tick-service.hpp @@ -27,7 +27,7 @@ ** drive the frame "creation" of a player dummy (the render engine is not ** ready yet). The intention is to use this service as part of a mock engine ** setup, used to verify the construction of engine components. As an integration - ** test, we build a "dummy player", delivering some test data frames to the Gui. + ** test, we build a "dummy player", delivering some test data frames to the GUI. 
** ** @see steam::play::DummyPlayerService ** @@ -74,7 +74,7 @@ namespace node { , bind (&TickService::timerLoop, this, callback) ) { - INFO (proc, "TickService started."); + INFO (steam, "TickService started."); } ~TickService () @@ -83,7 +83,7 @@ namespace node { this->join(); usleep (200000); // additional delay allowing GTK to dispatch the last output - INFO (proc, "TickService shutdown."); + INFO (steam, "TickService shutdown."); } diff --git a/src/vault/backend.c b/src/vault/backend.c index 0c1b19181..17f68cdde 100644 --- a/src/vault/backend.c +++ b/src/vault/backend.c @@ -90,20 +90,20 @@ lumiera_backend_init (void) const char* filehandles = lumiera_tmpbuf_snprintf (SIZE_MAX, - "backend.file.max_handles = %d", - /* roughly 2/3 of all available filehandles are managed by the backend */ + "vault.file.max_handles = %d", + /* roughly 2/3 of all available filehandles are managed by the Lumiera Vault */ (sysconf (_SC_OPEN_MAX)-10)*2/3); lumiera_config_setdefault (filehandles); long long max_entries; - lumiera_config_number_get ("backend.file.max_handles", &max_entries); + lumiera_config_number_get ("vault.file.max_handles", &max_entries); lumiera_filehandlecache_new (max_entries); #if SIZE_MAX <= 4294967295UL - lumiera_config_setdefault ("backend.mmap.as_limit = 3221225469"); + lumiera_config_setdefault ("vault.mmap.as_limit = 3221225469"); #else - lumiera_config_setdefault ("backend.mmap.as_limit = 211106232532992"); + lumiera_config_setdefault ("vault.mmap.as_limit = 211106232532992"); #endif struct rlimit as_rlimit; @@ -112,11 +112,11 @@ lumiera_backend_init (void) long long as_limit = (long long)as_rlimit.rlim_cur; if (as_rlimit.rlim_cur == RLIM_INFINITY) { - lumiera_config_number_get ("backend.mmap.as_limit", &as_limit); + lumiera_config_number_get ("vault.mmap.as_limit", &as_limit); } else { - INFO (backend, "address space limited to %luMiB", as_rlimit.rlim_cur/1024/1024); + INFO (vault, "address space limited to %luMiB", as_rlimit.rlim_cur/1024/1024); } 
lumiera_mmapcache_new (as_limit); diff --git a/src/vault/filedescriptorregistry.h b/src/vault/filedescriptorregistry.h index 095f0ef11..c21bbeb46 100644 --- a/src/vault/filedescriptorregistry.h +++ b/src/vault/filedescriptorregistry.h @@ -37,7 +37,7 @@ * Initialise the global file descriptor registry. * Opening hard linked files will be targeted to the same file descriptor. * This function never fails but dies on error. - * @todo proper backend/subsystem failure + * @todo proper vault/subsystem failure */ void lumiera_filedescriptorregistry_init (void); diff --git a/src/vault/fileheader.c b/src/vault/fileheader.c index 8fa9a26f4..539c414c1 100644 --- a/src/vault/fileheader.c +++ b/src/vault/fileheader.c @@ -22,8 +22,7 @@ /** @file fileheader.c - ** Implementation of a common header format for working data files created - ** by the lumiera backend. + ** Implementation of a common header format for working data files created by the Lumiera Vault. ** @todo development in this area is stalled since 2010 */ diff --git a/src/vault/mmap.c b/src/vault/mmap.c index da38c7e83..1f15b3054 100644 --- a/src/vault/mmap.c +++ b/src/vault/mmap.c @@ -76,13 +76,13 @@ lumiera_mmap_init (LumieraMMap self, LumieraFile file, off_t start, size_t size) */ TODO("move the setdefaults somewhere else, backend_defaults.c or so"); #if SIZE_MAX <= 4294967295U - lumiera_config_setdefault ("backend.mmap.window_size = 134217728"); + lumiera_config_setdefault ("vault.mmap.window_size = 134217728"); #else - lumiera_config_setdefault ("backend.mmap.window_size = 2147483648"); + lumiera_config_setdefault ("vault.mmap.window_size = 2147483648"); #endif long long mmap_window_size = 0; - lumiera_config_number_get ("backend.mmap.window_size", &mmap_window_size); + lumiera_config_number_get ("vault.mmap.window_size", &mmap_window_size); LumieraFiledescriptor descriptor = file->descriptor; diff --git a/src/vault/netnodefacade.hpp b/src/vault/netnodefacade.hpp index 90fe60909..000d7b420 100644 --- 
a/src/vault/netnodefacade.hpp +++ b/src/vault/netnodefacade.hpp @@ -44,7 +44,7 @@ namespace vault { * Interface to the vault layer (renderfarm node): * Global access point for starting a server listening on a TCP port * and accepting render tasks. Possibly such a server could also - * use the backend file/media access functions to provide a media + * use the Vault file/media access functions to provide a media * data access service. * * @todo define the services provided by such a node. diff --git a/tests/51-gui-model.tests b/tests/51-gui-model.tests index 01bf86b76..5af25ff0a 100644 --- a/tests/51-gui-model.tests +++ b/tests/51-gui-model.tests @@ -1,4 +1,4 @@ -TESTING "Component Test Suite: GUI Model Parts" ./test-suite --group=gui +TESTING "Component Test Suite: GUI Model Parts" ./test-suite --group=stage diff --git a/tests/52-gui-control.tests b/tests/52-gui-control.tests index 550c7106b..f8e8e8ae2 100644 --- a/tests/52-gui-control.tests +++ b/tests/52-gui-control.tests @@ -1,4 +1,4 @@ -TESTING "Component Test Suite: GUI Control Facilities" ./test-suite --group=gui +TESTING "Component Test Suite: GUI Control Facilities" ./test-suite --group=stage diff --git a/tests/SConscript b/tests/SConscript index 43596dab4..32b251660 100644 --- a/tests/SConscript +++ b/tests/SConscript @@ -49,7 +49,7 @@ def testCases(env,dir): env = env.Clone() env.Append(CPPPATH=dir) # add subdir to Includepath if dir.startswith('stage'): - # additional libs used from gui tests + # additional libs used from GUI tests env.mergeConf(['sigc++-2.0']) # pick up all test classes and link them shared diff --git a/tests/core/steam/mobject/session/testclip.cpp b/tests/core/steam/mobject/session/testclip.cpp index ac838e23c..50a94a64a 100644 --- a/tests/core/steam/mobject/session/testclip.cpp +++ b/tests/core/steam/mobject/session/testclip.cpp @@ -53,7 +53,7 @@ namespace test { asset::Media & createTestMedia () { - // install Mock-Interface to Lumiera backend + // install Mock-Interface to Lumiera 
Vault MediaAccessMock useMockMedia; return *asset::Media::create("test-2", VIDEO); // query magic filename diff --git a/tests/library/path-array-test.cpp b/tests/library/path-array-test.cpp index 5656efb09..24b8dad8c 100644 --- a/tests/library/path-array-test.cpp +++ b/tests/library/path-array-test.cpp @@ -500,7 +500,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (PathArray_test, "unit gui"); + LAUNCHER (PathArray_test, "unit stage"); }} // namespace lib::test diff --git a/tests/stage/abstract-tangible-test.cpp b/tests/stage/abstract-tangible-test.cpp index 87afc3b79..42ef4e2f9 100644 --- a/tests/stage/abstract-tangible-test.cpp +++ b/tests/stage/abstract-tangible-test.cpp @@ -757,7 +757,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (AbstractTangible_test, "unit gui"); + LAUNCHER (AbstractTangible_test, "unit stage"); }}} // namespace stage::model::test diff --git a/tests/stage/bus-term-test.cpp b/tests/stage/bus-term-test.cpp index f92f80463..9732d7722 100644 --- a/tests/stage/bus-term-test.cpp +++ b/tests/stage/bus-term-test.cpp @@ -788,7 +788,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (BusTerm_test, "unit gui"); + LAUNCHER (BusTerm_test, "unit stage"); }}} // namespace stage::model::test diff --git a/tests/stage/ctrl/state-map-grouping-storage-test.cpp b/tests/stage/ctrl/state-map-grouping-storage-test.cpp index 045058045..ef5f77a9e 100644 --- a/tests/stage/ctrl/state-map-grouping-storage-test.cpp +++ b/tests/stage/ctrl/state-map-grouping-storage-test.cpp @@ -159,7 +159,7 @@ namespace test { /** Register this test class... 
*/ - LAUNCHER (StateMapGroupingStorage_test, "unit gui"); + LAUNCHER (StateMapGroupingStorage_test, "unit stage"); }}} // namespace stage::ctrl::test diff --git a/tests/stage/interact/cmd-context-test.cpp b/tests/stage/interact/cmd-context-test.cpp index e00c592a8..2003a7cd8 100644 --- a/tests/stage/interact/cmd-context-test.cpp +++ b/tests/stage/interact/cmd-context-test.cpp @@ -71,7 +71,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (CmdContext_test, "unit gui"); + LAUNCHER (CmdContext_test, "unit stage"); }}} // namespace stage::interact::test diff --git a/tests/stage/interact/ui-coord-resolver-test.cpp b/tests/stage/interact/ui-coord-resolver-test.cpp index e5de14fe6..65dd502a4 100644 --- a/tests/stage/interact/ui-coord-resolver-test.cpp +++ b/tests/stage/interact/ui-coord-resolver-test.cpp @@ -749,7 +749,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (UICoordResolver_test, "unit gui"); + LAUNCHER (UICoordResolver_test, "unit stage"); }}} // namespace stage::interact::test diff --git a/tests/stage/interact/ui-coord-test.cpp b/tests/stage/interact/ui-coord-test.cpp index f8413ea94..2dc9b7f06 100644 --- a/tests/stage/interact/ui-coord-test.cpp +++ b/tests/stage/interact/ui-coord-test.cpp @@ -452,7 +452,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (UICoord_test, "unit gui"); + LAUNCHER (UICoord_test, "unit stage"); }}} // namespace stage::interact::test diff --git a/tests/stage/interact/ui-location-solver-test.cpp b/tests/stage/interact/ui-location-solver-test.cpp index 9979e8609..cd47a3359 100644 --- a/tests/stage/interact/ui-location-solver-test.cpp +++ b/tests/stage/interact/ui-location-solver-test.cpp @@ -481,7 +481,7 @@ namespace test { /** Register this test class... 
*/ - LAUNCHER (UILocationSolver_test, "unit gui"); + LAUNCHER (UILocationSolver_test, "unit stage"); }}} // namespace stage::interact::test diff --git a/tests/stage/interact/view-spec-dsl-test.cpp b/tests/stage/interact/view-spec-dsl-test.cpp index f9a6f6b72..48d3b5fdc 100644 --- a/tests/stage/interact/view-spec-dsl-test.cpp +++ b/tests/stage/interact/view-spec-dsl-test.cpp @@ -228,7 +228,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (ViewSpecDSL_test, "unit gui"); + LAUNCHER (ViewSpecDSL_test, "unit stage"); }}} // namespace stage::interact::test diff --git a/tests/stage/model/element-access-test.cpp b/tests/stage/model/element-access-test.cpp index afb490ba9..56098f35c 100644 --- a/tests/stage/model/element-access-test.cpp +++ b/tests/stage/model/element-access-test.cpp @@ -157,7 +157,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (ElementAccess_test, "unit gui"); + LAUNCHER (ElementAccess_test, "unit stage"); }}} // namespace stage::model::test diff --git a/tests/stage/model/w-link-test.cpp b/tests/stage/model/w-link-test.cpp index 5a6d63637..d59595982 100644 --- a/tests/stage/model/w-link-test.cpp +++ b/tests/stage/model/w-link-test.cpp @@ -213,7 +213,7 @@ namespace test { /** Register this test class... */ - LAUNCHER (WLink_test, "unit gui"); + LAUNCHER (WLink_test, "unit stage"); }}} // namespace stage::model::test diff --git a/tests/stage/session-structure-mapping-test.cpp b/tests/stage/session-structure-mapping-test.cpp index dd40771c8..dfd5362a0 100644 --- a/tests/stage/session-structure-mapping-test.cpp +++ b/tests/stage/session-structure-mapping-test.cpp @@ -111,7 +111,7 @@ namespace test { /** Register this test class... 
*/ - LAUNCHER (SessionStructureMapping_test, "unit gui"); + LAUNCHER (SessionStructureMapping_test, "unit stage"); }}} // namespace stage::model::test diff --git a/tests/stage/test-gui-test.cpp b/tests/stage/test-gui-test.cpp index ac2e73bc7..5c769db00 100644 --- a/tests/stage/test-gui-test.cpp +++ b/tests/stage/test-gui-test.cpp @@ -54,7 +54,7 @@ namespace test{ /** Register this test class to be invoked in some test groups */ - LAUNCHER (TestGui_test, "unit gui"); + LAUNCHER (TestGui_test, "unit stage"); diff --git a/tests/vault/test-resourcecollector.c b/tests/vault/test-resourcecollector.c index 992ab09fa..ab533380e 100644 --- a/tests/vault/test-resourcecollector.c +++ b/tests/vault/test-resourcecollector.c @@ -21,7 +21,7 @@ * *****************************************************/ /** @file test-resourcecollector.c - ** C unit test to cover management of low-level resources for the backend + ** C unit test to cover management of low-level resources for the Vault ** @see resourcecollector.h */ diff --git a/tests/vault/thread-wrapper-join-test.cpp b/tests/vault/thread-wrapper-join-test.cpp index efa6565ff..0dc481a81 100644 --- a/tests/vault/thread-wrapper-join-test.cpp +++ b/tests/vault/thread-wrapper-join-test.cpp @@ -51,7 +51,7 @@ namespace test { /***********************************************************************//** - * @test use the Lumiera backend to create some new threads, additionally + * @test use the Lumiera Vault to create some new threads, additionally * synchronising with these child threads and waiting for termination. 
* * @see vault::Thread diff --git a/tests/vault/thread-wrapper-test.cpp b/tests/vault/thread-wrapper-test.cpp index 38f55e7b3..9079cdb5a 100644 --- a/tests/vault/thread-wrapper-test.cpp +++ b/tests/vault/thread-wrapper-test.cpp @@ -117,7 +117,7 @@ namespace vault { /**********************************************************************//** - * @test use the Lumiera backend to create some new threads, utilising the + * @test use the Lumiera Vault to create some new threads, utilising the * lumiera::Thread wrapper for binding to an arbitrary operation * and passing the appropriate context. * diff --git a/wiki/renderengine.html b/wiki/renderengine.html index 5b8c43ba8..bdb66f463 100644 --- a/wiki/renderengine.html +++ b/wiki/renderengine.html @@ -922,7 +922,7 @@ Conceptually, assets belong to the [[global or root scope|ModelRootMO]] of the s
Conceptually, Assets and ~MObjects represent different views onto the same entities. Assets focus on bookkeeping of the contents, while the media objects allow manipulation and EditingOperations. Usually, on the implementation side, such closely linked dual views require careful consideration.
 
 !redundancy
-Obviously there is the danger of getting each entity twice, as Asset and as ~MObject. While such dual entities could be OK in conjunction with much specialised processing, in the case of Lumiera's Proc-Layer most of the functionality is shifted to naming schemes, configuration and generic processing, leaving the actual objects almost empty and deprived of distinguishing properties. Thus, starting out from the required concepts, an attempt was made to join, reduce and straighten the design.
+Obviously there is the danger of getting each entity twice, as Asset and as ~MObject. While such dual entities could be OK in conjunction with much specialised processing, in the case of Lumiera's Steam-Layer most of the functionality is shifted to naming schemes, configuration and generic processing, leaving the actual objects almost empty and deprived of distinguishing properties. Thus, starting out from the required concepts, an attempt was made to join, reduce and straighten the design.
 * type and channel configuration is concentrated to MediaAsset
 * the accounting of structural elements in the model is done through StructAsset
 * the object instance handling is done in a generic fashion by using placements and object references
@@ -1275,7 +1275,7 @@ __see also__
 
-
The invocation of individual [[render nodes|ProcNode]] uses an ''buffer table'' internal helper data structure to encapsulate technical details of the allocation, use, re-use and feeing of data buffers for the media calculations. Here, the management of the physical data buffers is delegated through a BufferProvider, which typically is implemented relying on the ''frame cache'' in the backend. Yet some partially quite involved technical details need to be settled for each invocation: We need input buffers, maybe provided as external input, while in other cases to be filled by a recursive call. We need storage to prepare the (possibly automated) parameters, and finally we need a set of output buffers. All of these buffers and parameters need to be rearranged for invoking the (external) processing function, followed by releasing the input buffers and commiting the output buffers to be used as result.
+
The invocation of individual [[render nodes|ProcNode]] uses a ''buffer table'' internal helper data structure to encapsulate technical details of the allocation, use, re-use and freeing of data buffers for the media calculations. Here, the management of the physical data buffers is delegated through a BufferProvider, which typically is implemented relying on the ''frame cache'' in the Vault. Yet some partially quite involved technical details need to be settled for each invocation: We need input buffers, maybe provided as external input, or in other cases filled by a recursive call. We need storage to prepare the (possibly automated) parameters, and finally we need a set of output buffers. All of these buffers and parameters need to be rearranged for invoking the (external) processing function, followed by releasing the input buffers and committing the output buffers to be used as result.
 
 Because there are several flavours of node wiring, the building blocks comprising such a node invocation will be combined depending on the circumstances. Performing all these various steps is indeed the core concern of the render node -- with the help of BufferTable to deal with the repetitive, tedious and technical details.
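The per-invocation sequence described above -- acquire inputs, obtain output storage, run the processing function, release and commit -- can be sketched roughly as follows. All names (`Buffer`, `BufferProvider`, `invokeNode`) are hypothetical stand-ins for illustration, not the actual Lumiera interfaces:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <memory>
#include <vector>

// Hypothetical stand-in for a data buffer handed out by the frame cache
struct Buffer
{
    std::vector<float> data;
    bool committed = false;
};
using BufferP = std::shared_ptr<Buffer>;

// Stand-in for a BufferProvider; a real one would delegate to the Vault's frame cache
struct BufferProvider
{
    BufferP acquire (size_t frames)
    {
        auto buff = std::make_shared<Buffer>();
        buff->data.resize (frames);
        return buff;
    }
    void release (BufferP&) { /* would hand the storage back to the cache */ }
};

using ProcessFun = std::function<void(Buffer const&, Buffer&)>;

// One node invocation: obtain output storage, invoke the (external) processing
// function, release the input buffer, then commit the output as result.
BufferP invokeNode (BufferProvider& provider, BufferP input, ProcessFun process)
{
    BufferP out = provider.acquire (input->data.size());
    process (*input, *out);        // the actual media calculation
    provider.release (input);      // input no longer needed
    out->committed = true;         // result now usable downstream
    return out;
}
```

A recursive pull (inputs produced by first invoking predecessor nodes) would slot in before the `process` call; the storage for automated parameters is omitted here for brevity.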
 
@@ -1315,7 +1315,7 @@ From this backbone, the actual [[building mechanism|BuilderMechanics]] proceeds
 The building itself will be broken down into several small tool application steps. Each of these steps has to be mapped to the MObjects found on the [[Timeline]]. Remember: the idea is that the so called "[[Fixture]]" contains only [[ExplicitPlacement]]s which in turn link to MObjects like Clips, Effects and [[Automation]]. So it is sufficient to traverse this list and map the build tools to the elements. Each of these build tools has its own state, which serves to build up the resulting Render Engine. So far I see two steps to be necessary:
 * find the "Segments", i.e. the locations where the overall configuration changes
 * for each segment: generate a ProcNode for each found MObject and wire them accordingly
-Note, //we still have to work out how exactly building, rendering and playback work// together with the backend-design. The build process as such doesn't overly depend on these decisions. It is easy to reconfigure this process. For example, it would be possible as well to build for each frame separately (as Cinelerra2 does), or to build one segment covering the whole timeline (and handle everything via [[Automation]]
+Note, //we still have to work out how exactly building, rendering and playback work// together with the Vault-design. The build process as such doesn't overly depend on these decisions. It is easy to reconfigure this process. For example, it would be possible as well to build for each frame separately (as Cinelerra2 does), or to build one segment covering the whole timeline (and handle everything via [[Automation]]).
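A toy model of these two steps -- segmentation, then node creation per segment -- might look like this. The types merely mimic the wiki terms `ExplicitPlacement` and Fixture; this is an illustrative sketch, not the real Steam-Layer code:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Toy stand-in for an ExplicitPlacement within the Fixture
struct ExplicitPlacement
{
    int start, end;          // time span covered (illustrative integer times)
    std::string object;      // the MObject this placement refers to
};

// Step 1: find the "Segments", i.e. the points where configuration changes
std::vector<int>
segmentBoundaries (std::vector<ExplicitPlacement> const& fixture)
{
    std::vector<int> points;
    for (auto const& p : fixture)
      {
        points.push_back (p.start);
        points.push_back (p.end);
      }
    std::sort (points.begin(), points.end());
    points.erase (std::unique (points.begin(), points.end()), points.end());
    return points;
}

// Step 2: for each segment, "wire a node" for every object covering it
// (here just collecting object names, keyed by segment start time)
std::map<int, std::vector<std::string>>
buildSegments (std::vector<ExplicitPlacement> const& fixture)
{
    std::map<int, std::vector<std::string>> engine;
    auto bounds = segmentBoundaries (fixture);
    for (size_t i = 0; i+1 < bounds.size(); ++i)
        for (auto const& p : fixture)
            if (p.start <= bounds[i] && bounds[i+1] <= p.end)
                engine[bounds[i]].push_back (p.object);
    return engine;
}
```

Two overlapping placements thus yield three segments, with the overlap segment holding a node for each object -- which is the point where the overall configuration changes.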
 
 &rarr;see also: [[Builder Overview|Builder]]
 &rarr;see also: BasicBuildingOperations
@@ -1341,7 +1341,7 @@ Its a good idea to distinguish clearly between those concepts. A plugin is a pie
 !!!node interfaces
 As a consequence of this distinctions, in conjunction with a processing node, we have to deal with three different interfaces
 * the __build interface__ is used by the builder to set up and wire the nodes. It can be full blown C++ (including templates)
-* the __operation interface__ is used to run the calculations, which happens in cooperation of Proc-Layer and Backend. So a function-style interface is preferable.
+* the __operation interface__ is used to run the calculations, which happens in cooperation of Steam-Layer and Vault. So a function-style interface is preferable.
 * the __inward interface__ is accessed by the processing function in the course of the calculations to get at the necessary context, including in/out buffers and param values.
 
 !!!wiring data connections
@@ -1352,14 +1352,14 @@ With regard to the build process, the wiring of data connections translates into
 In many cases, the parameter values provided by these connections aren't frame based data, rather, the processing function needs a call interface to get the current value (value for a given time), which is provided by the parameter object. Here, the wiring needs to link to the suitable parameter instance, which is located within the high-level model (!). As an additional complication, calculating the actual parameter value may require a context data frame (typically for caching purposes to speed up the interpolation). While these parameter context data frames are completely opaque for the render node, they have to be passed in and out similar to the state needed by the node itself, and the wiring has to prepare for accessing these frames too.
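The "call interface to get the current value" could be pictured as follows -- a minimal sketch under the assumption of keyframes with linear interpolation; the name `Param` and its methods are invented for illustration:

```cpp
#include <cassert>
#include <iterator>
#include <map>

// Minimal sketch of a parameter object: not frame based data, rather a call
// interface yielding the value for a given time. Assumes at least one keyframe.
class Param
{
    std::map<double,double> keyframes_;   // time -> value

public:
    void setKey (double time, double val) { keyframes_[time] = val; }

    // value for the given time, linearly interpolated between keyframes
    double getValue (double time)  const
    {
        auto hi = keyframes_.lower_bound (time);
        if (hi == keyframes_.begin()) return hi->second;             // before first key
        if (hi == keyframes_.end())   return std::prev(hi)->second;  // after last key
        auto lo = std::prev (hi);
        double f = (time - lo->first) / (hi->first - lo->first);
        return lo->second + f * (hi->second - lo->second);
    }
};
```

The opaque interpolation context frame mentioned above (cache data passed in and out alongside the node state) is omitted; a real implementation would thread such a state blob through `getValue`.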
 
-
+
The Builder takes some MObject/[[Placement]] information (called Timeline) and generates out of this a Render Engine configuration able to render these Objects. It makes all decisions and retrieves the current configuration of all objects and plugins, so the Render Engine can just process them straightforwardly.
 
 The Builder is the central part of the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]]
 <br/>
 As the builder [[has to create a render node network|BuilderModelRelation]] implementing most of the features and wiring possible with the various MObject kinds and placement types, it is a rather complicated piece of software. In order to keep it manageable, it is broken down into several specialized sub components:
 * clients access builder functionality via the BuilderFacade
-* the [[Proc-Layer-Controller|Controller]] initiates the BuildProcess and does the overall coordination of scheduling edit operations, rebuilding the fixture and triggering the Builder
+* the [[Steam-Layer-Controller|SteamDispatcher]] initiates the BuildProcess and does the overall coordination of scheduling edit operations, rebuilding the fixture and triggering the Builder
 * to carry out the building, we use several primary tools (SegmentationTool, NodeCreatorTool,...),  together with a BuilderToolKit to be supplied by the [[tool factory|BuilderToolFactory]]
* //operating the Builder// can be viewed from two different angles, either emphasizing the [[basic building operations|BasicBuildingOperations]] employed to assemble the render node network, or focussing rather on the [[mechanics|BuilderMechanics]] of cooperating parts while processing.
 * besides, we can identify a small set of elementary situations we call [[builder primitives|BuilderPrimitives]], to be covered by the mentioned BuilderToolKit; by virtue of [[processing patterns|ProcPatt]] they form an [[interface to the rule based configuration|BuilderRulesInterface]].
@@ -1551,8 +1551,8 @@ TertiaryDark: #667
 Error: #f88
-
Within Proc-Layer, a Command is the abstract representation of a single operation or a compound of operations mutating the HighLevelModel.
-Thus, each command is a ''Functor'' and a ''Closure'' ([[command pattern|http://en.wikipedia.org/wiki/Command_pattern]]), allowing commands to be treated uniformly, enqueued in a [[dispatcher|ProcDispatcher]], logged to the SessionStorage and registered with the UndoManager.
+
Within Steam-Layer, a Command is the abstract representation of a single operation or a compound of operations mutating the HighLevelModel.
+Thus, each command is a ''Functor'' and a ''Closure'' ([[command pattern|http://en.wikipedia.org/wiki/Command_pattern]]), allowing commands to be treated uniformly, enqueued in a [[dispatcher|SteamDispatcher]], logged to the SessionStorage and registered with the UndoManager.
 
Commands are //defined// using a [[fluent API|http://en.wikipedia.org/wiki/Fluent_interface]], just by providing appropriate functions. Additionally, the Closure necessary for executing a command is built by binding to a set of concrete parameters. After reaching this point, the state of the internal representation could be serialised by plain-C function calls, which is important for integration with the SessionStorage.
 
@@ -1611,7 +1611,7 @@ While obviously the first solution looks elegant and is much simpler to implemen
 While the usual »Memento« implementation might automatically capture the whole model (resulting in a lot of data to be stored and some uncertainty about the scope of the model to be captured), in Lumiera we rely instead on the client code to provide a ''capture function''&nbsp;and a ''playback function'' alongside with the actual operation. To help with this task, we provide a set of standard handlers for common situations. This way, operations might capture very specific information, might provide an "intelligent undo" to restore a given semantic instead of just a fixed value -- and moreover the client code is free actually to employ the "inverse operation" model in special cases where it just makes more sense than capturing state.
 
 !Handling of commands
-A command may be [[defined|CommandDefinition]] completely from scratch, or it might just serve as a CommandPrototype with specific targets and parameters. The command could then be serialised and later be recovered and re-bound with the parameters, but usually it will be handed over to the ProcDispatcher, pending execution. When ''invoking'', the handling sequence is to [[log the command|SessionStorage]], then call the ''undo capture function'', followed from calling the actual ''operation function''. After success, the logging and [[undo registration|UndoManager]] is completed. In any case, finally the ''result signal'' (a functor previously stored within the command) is emitted. {{red{10/09 WIP: not clear if we indeed implement this concept}}}
+A command may be [[defined|CommandDefinition]] completely from scratch, or it might just serve as a CommandPrototype with specific targets and parameters. The command could then be serialised and later be recovered and re-bound with the parameters, but usually it will be handed over to the SteamDispatcher, pending execution. When ''invoking'', the handling sequence is to [[log the command|SessionStorage]], then call the ''undo capture function'', followed by calling the actual ''operation function''. After success, the logging and [[undo registration|UndoManager]] is completed. In any case, finally the ''result signal'' (a functor previously stored within the command) is emitted. {{red{10/09 WIP: not clear if we indeed implement this concept}}}
 
By design, commands are single-serving value objects; executing an operation repeatedly requires creating a collection of command objects, one for each invocation. While nothing prevents you from invoking the command operation functor several times, each invocation will overwrite the undo state captured by the previous invocation. Thus, each command instance should be seen as the promise (or later the trace) of a single operation execution. In a similar vein, the undo capturing should be defined so as to be self-sufficient, so that invoking just the undo functor of a single command performs any necessary steps to restore the situation found before invoking the corresponding mutation functor -- of course only //with respect to the topic covered by this command.// So, while commands provide a lot of flexibility and allow to do a multitude of things, certainly there is an intended CommandLifecycle.
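The functor-plus-closure idea, with an explicit capture function and undo playback function, can be condensed into a few lines. `Command`, `makeSetValueCommand` and the global `sessionValue` target are invented for this sketch and do not reflect the actual command framework:

```cpp
#include <cassert>
#include <functional>
#include <memory>

// Sketch: a command bundles operation, state capture and undo playback
struct Command
{
    std::function<void()> operate;             // the actual mutation
    std::function<void()> captureUndoState;    // memento capture function
    std::function<void()> undo;                // playback function

    void invoke()
    {   // handling sequence: capture undo state first, then run the operation
        captureUndoState();
        operate();
    }
};

int sessionValue = 0;    // toy stand-in for a piece of session state

// binding to concrete parameters yields the closure
Command
makeSetValueCommand (int newVal)
{
    auto memento = std::make_shared<int> (0);
    return Command{ [newVal]  { sessionValue = newVal; }      // operation
                  , [memento] { *memento = sessionValue; }    // capture
                  , [memento] { sessionValue = *memento; }    // undo playback
                  };
}
```

Note how re-invoking the same instance would overwrite the memento -- which is exactly the single-serving property described above.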
 &rarr; command [[definition|CommandDefinition]] and [[-lifecycle|CommandLifecycle]]
@@ -1677,7 +1677,7 @@ If the currently active element is something within a scope, we want the new sco
 So, for the purpose of this analysis, the "add Track" action serves as an example where we need to pick up the subject of the change from context...
 * the fact there is always a timeline and a sequence, also implies there is always a fork root (track)
 * so this operation basically adds to a //"current scope"// -- or next to it, as sibling
-* this means, the UI logic has to provide a //current model element,// while the details of actually selecting a parent are decided elsewhere (in Proc-Layer, in rules)
+* this means, the UI logic has to provide a //current model element,// while the details of actually selecting a parent are decided elsewhere (in Steam-Layer, in rules)
 
@@ -1689,7 +1689,7 @@ The handling of a command starts out with a ''command ID'' provided by the clien By ''binding'' to specific operation arguments, the definition is //armed up//&nbsp; and becomes a real ''command''. This is similar to creating an instance from a class. Behind the scenes, storage is allocated to hold the argument values and any state captured to create the ability to UNDO the command's effect later on. A command is operated or executed by passing it to an ''execution pattern'' &mdash; there is a multitude of possible execution patterns to choose from, depending on the situation. -{{red{WIP... details of ~ProcDispatcher not specified yet}}} +{{red{WIP... details of ~SteamDispatcher not specified yet}}} When a command has been executed (and maybe undone), it's best to leave it alone, because the UndoManager might hold a reference. Anyway, a ''clone of the command'' could be created, maybe bound with different arguments and treated separately from the original command. @@ -1703,7 +1703,7 @@ State predicates are accessible through the Command (frontend); additionally the
//Helper facility to ease the creation of actual command definitions.//
-A [[Proc-Layer command|CommandHandling]] is a functor, which can be parametrised with actual arguments. It needs to be [[defined|CommandDefinition]] beforehand, which means to establish an unique name and to supply three functions, one for the actual command operation, one to capture state and one to [[UNDO]] the effect of the command invocation.
+A [[Steam-Layer command|CommandHandling]] is a functor, which can be parametrised with actual arguments. It needs to be [[defined|CommandDefinition]] beforehand, which means to establish a unique name and to supply three functions, one for the actual command operation, one to capture state and one to [[UNDO]] the effect of the command invocation.
 
 The helper class {{{CommandSetup}}} allows to create series of such definitions with minimal effort. Since any access and mutation from the UI into the Session data must be performed by invoking such commands, a huge amount of individual command definitions need to be written eventually. These are organised into a series of implementation translation units with location {{{poc/cmd/*-cmd.cpp}}}.
 
@@ -1766,7 +1766,7 @@ Connecting data streams of differing type involves a StreamConversion. Mostly, t
 
-
This index refers to the conceptual, more abstract and formally specified aspects of the Proc-Layer and Lumiera in general.
+
This index refers to the conceptual, more abstract and formally specified aspects of the Steam-Layer and Lumiera in general.
More often than not, these emerge from immediate solutions, being perceived as especially expressive, when taken on, yielding guidance by themselves. Some others, [[Placements|Placement]] and [[Advice]] to mention here, immediately substantiate the original vision.
@@ -1863,21 +1863,13 @@ The fake implementation should follow the general pattern planned for the Prolog
-
-
Here, in the context of the Render Engine, the Controller component is responsible for triggering and coordinating the build process and for activating the backend and the Render Engine configuration created by the Builder to carry out the actual rendering. There is another Controller in the backend, the ~PlaybackController, which is in charge of the playback/rendering/display state of the application, and consequently will call this (Proc-Layer) Controller to get the necessary Render Engine.
-
-!Facade
-This is an very important external Interface, because it links together all three Layers of our current architecture. It can be used by the backend to initiate [[Render Processes (=StateProxy)|StateProxy]] and it will probably be used by the Dispatcher for GUI actions as well...
-
-
-
-
-
The Render Engine is the part of the application doing the actual video calculations. Relying on system level services and retrieving raw audio and video data through [[Lumiera's Vault Layer|VaultLayer]], its operations are guided by the objects and parameters edited by the user in [[the session|Session]]. The middle layer of the Lumiera architecture, known as the SteamLayer, spans the area between these two exteremes, providing the the (abstract) edit operations available to the user, the representation of [["editable things"|MObjects]] and the translation of those into structures and facilities allowing to [[drive the rendering|Rendering]].
+
+
The Render Engine is the part of the application doing the actual video calculations. Relying on system level services and retrieving raw audio and video data through [[Lumiera's Vault Layer|VaultLayer]], its operations are guided by the objects and parameters edited by the user in [[the session|Session]]. The middle layer of the Lumiera architecture, known as the Steam-Layer, spans the area between these two extremes, providing the (abstract) edit operations available to the user, the representation of [["editable things"|MObjects]] and the translation of those into structures and facilities allowing to [[drive the rendering|Rendering]].
 
 !About this wiki page
-|background-color:#e3f3f1;width:96ex;padding:2ex; This TiddlyWiki is the central location for design, planning and documentation of the Lumiera Proc-Layer. Some parts are used as //extended brain// &mdash; collecting ideas, considerations and conclusions &mdash; while other tiddlers contain the decisions and document the planned or implemented facilities. The intention is to move over the more mature parts into the emerging technical documentation section on the [[Lumiera website|http://www.lumiera.org]] eventually. <br/><br/>Besides cross-references, content is largely organised through [[Tags|TabTags]], most notably <br/><<tag overview>> &middot; <<tag def>> &middot; <<tag decision>> &middot; <<tag Concepts>> &middot; <<tag GuiPattern>> <br/> <<tag Model>> &middot; <<tag SessionLogic>> &middot; <<tag GuiIntegration>> &middot; <<tag Builder>> &middot; <<tag Rendering>> &middot; <<tag Player>> &middot; <<tag Rules>> &middot; <<tag Types>> |
+|background-color:#e3f3f1;width:96ex;padding:2ex; This TiddlyWiki is the central location for design, planning and documentation of the Core. Some parts are used as //extended brain// &mdash; collecting ideas, considerations and conclusions &mdash; while other tiddlers contain the decisions and document the planned or implemented facilities. The intention is to move over the more mature parts into the emerging technical documentation section on the [[Lumiera website|http://www.lumiera.org]] eventually. <br/><br/>Besides cross-references, content is largely organised through [[Tags|TabTags]], most notably <br/><<tag overview>> &middot; <<tag def>> &middot; <<tag decision>> &middot; <<tag Concepts>> &middot; <<tag GuiPattern>> <br/> <<tag Model>> &middot; <<tag SessionLogic>> &middot; <<tag GuiIntegration>> &middot; <<tag Builder>> &middot; <<tag Rendering>> &middot; <<tag Player>> &middot; <<tag Rules>> &middot; <<tag Types>> |
 
-!~Proc-Layer Summary
+!~Steam-Layer Summary
 When editing, the user operates several kinds of //things,// organized as [[assets|Asset]] in the AssetManager, like media, clips, effects, codecs, configuration templates. Within the context of the [[Project or Session|Session]], we can use these as &raquo;[[Media Objects|MObjects]]&laquo; &mdash; especially, we can [[place|Placement]] them in various kinds within the session and relative to one another.
 
 Now, from any given configuration within the session, we create sort of a frozen and tied-down snapshot, here called &raquo;[[Fixture|Fixture]]&laquo;, containing all currently active ~MObjects, broken down to elementary parts and made explicit if necessary. This Fixture acts as an isolation layer towards the Render Engine. We will hand it over to the [[Builder]], which in turn will transform it into a network of connected [[render nodes|ProcNode]]. This network //implements//&nbsp; the [[Render Engine|OverviewRenderEngine]].
@@ -2039,7 +2031,7 @@ We ''separate'' processing (rendering) and configuration (building). The [[Build
 ''Objects are [[placed|Placement]] rather'' than assembled, connected, wired, attached. This is more of a rule-based approach and gives us one central metaphor and abstraction, allowing us to treat everything in a uniform manner. You can place it as you like, and the builder tries to make sense out of it, silently disabling what doesn't make sense.
 A [[Sequence]] is just a collection of configured and placed objects (and has no additional, fixed structure). [["Tracks" (forks)|Fork]] form a mere organisational grid, they are grouping devices not first-class entities (a track doesn't "have" a pipe or "is" a video track and the like; it can be configured to behave in such manner by using placements though). [[Pipes|Pipe]] are hooks for making connections and are the only facility to build processing chains. We have global pipes, and each clip is built around a local [[source port|ClipSourcePort]] &mdash; and that's all. No special "media viewer" and "arranger", no special role for media sources, no commitment to some fixed media stream types (video and audio). All of this is sort of pushed down to be configuration, represented as asset of some kind. For example, we have [[processing pattern|ProcPatt]] assets to represent the way of building the source network for reading from some media file (including codecs treated like effect plugin nodes)
 
-The model in Proc-Layer is rather an //internal model.// What is exposed globally, is a structural understanding of this model. In this structural understanding, there are Assets and ~MObjects, which both represent the flip side of the same coin: Assets relate to bookkeeping, while ~MObjects relate to building and manipulation of the model. In the actual data represntation within the HighLevelModel, we settled upon some internal reductions, preferring either the //Asset side// or the //~MObject side// to represent some relevant entities. See &rarr; AssetModelConnection.
+The model in Steam-Layer is rather an //internal model.// What is exposed globally, is a structural understanding of this model. In this structural understanding, there are Assets and ~MObjects, which both represent the flip side of the same coin: Assets relate to bookkeeping, while ~MObjects relate to building and manipulation of the model. In the actual data representation within the HighLevelModel, we settled upon some internal reductions, preferring either the //Asset side// or the //~MObject side// to represent some relevant entities. See &rarr; AssetModelConnection.
 
 Actual ''media data and handling'' is abstracted rigorously. Media is conceived as being stream-like data of distinct StreamType. When it comes to more low-level media handling, we build on the DataFrame abstraction. Media processing isn't the focus of Lumiera; we organise the processing but otherwise ''rely on media handling libraries.'' In a similar vein, multiplicity is understood as type variation. Consequently, we don't build an audio and video "section" and we don't even have audio tracks and video tracks. Lumiera uses tracks and clips, and clips build on media, but we're able to deal with [[multichannel|MultichannelMedia]] mixed-typed media natively.
 
@@ -2055,7 +2047,7 @@ A strong emphaisis is placed on ''Separation of Concerns'' and especially on ''O
 
-
This ~Proc-Layer and ~Render-Engine implementation started out as a design-draft by [[Ichthyo|mailto:Ichthyostega@web.de]] in summer 2007. The key idea of this design-draft is to use the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]] for the Render Engine, thus separating completely the //building// of the Render Pipeline from //running,// i.e. doing the actual Render. The Nodes in this Pipeline should process Video/Audio and do nothing else. No more decisions, tests and conditional operations when running the Pipeline. Move all of this out into the configuration of the pipeline, which is done by the Builder.
+
This ~Steam-Layer and ~Render-Engine implementation started out as a design-draft by [[Ichthyo|mailto:Ichthyostega@web.de]] in summer 2007. The key idea of this design-draft is to use the [[Builder Pattern|http://en.wikipedia.org/wiki/Builder_pattern]] for the Render Engine, thus separating completely the //building// of the Render Pipeline from //running,// i.e. doing the actual Render. The Nodes in this Pipeline should process Video/Audio and do nothing else. No more decisions, tests and conditional operations when running the Pipeline. Move all of this out into the configuration of the pipeline, which is done by the Builder.
 
 !Why doesn't the current Cinelerra-2 Design succeed?
 The design of Cinelerra-2 basically follows a similar design, but [[fails because of two reasons...|https://lumiera.org/project/background/history/CinelerraWoes.html]]
@@ -2083,7 +2075,7 @@ Another pertinent theme is to make the basic building blocks simpler, while on t
 
 !Starting point
 The intention is to start out with the design of the PlayerDummy and to //transform//&nbsp; it into the full player subsystem.
-* the ~DisplayService in that dummy player design moves down into Proc-Layer and becomes the OutputManager
+* the ~DisplayService in that dummy player design moves down into Steam-Layer and becomes the OutputManager
 * likewise, the ~DisplayerSlot is transformed into the interface OutputSlot, with various implementations to be registered with the OutputManager
 * the core idea of having a play controller act as the frontend and handle to a PlayProcess is retained.
 
@@ -2555,7 +2547,7 @@ Additionally, they may be used for resource management purposes by embedding a r
 
An entity within the RenderEngine, responsible for translating a logical [[calculation stream|CalcStream]] (corresponding to a PlayProcess) into a sequence of individual RenderJob entries, which can then be handed over to the [[Scheduler]]. Performing this operation involves a special application of [[time quantisation|TimeQuant]]: after establishing a suitable starting point, a typically contiguous series of frame numbers need to be generated, together with the time coordinates for each of those frames.
 
-The dispatcher works together with the job ticket(s) and the scheduler; actually these are the //core abstractions//&nbsp; the process of ''job planning'' relies on. While the actual scheduler implementation lives within the backend, the job tickets and the dispatcher are located within the [[Segmentation]], which is the backbone of the [[low-level model|LowLevelModel]]. More specifically, the dispatcher interface is //implemented//&nbsp; by a set of &rarr; [[dispatcher tables|DispatcherTables]] within the segmentation.
+The dispatcher works together with the job ticket(s) and the scheduler; actually these are the //core abstractions//&nbsp; the process of ''job planning'' relies on. While the actual scheduler implementation lives within the Vault, the job tickets and the dispatcher are located within the [[Segmentation]], which is the backbone of the [[low-level model|LowLevelModel]]. More specifically, the dispatcher interface is //implemented//&nbsp; by a set of &rarr; [[dispatcher tables|DispatcherTables]] within the segmentation.
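The quantisation step at the heart of this dispatch -- mapping time points to frame numbers and back -- boils down to the following arithmetic. This is a deliberate simplification using integral microseconds and an integral fps; the real TimeQuant handling covers grids, offsets and rational frame rates:

```cpp
#include <cassert>
#include <cstdint>

const int64_t USEC_PER_SEC = 1000000;

// frame number whose span covers the given (non-negative) time point
int64_t frameAt (int64_t timeUsec, int fps)
{
    return timeUsec * fps / USEC_PER_SEC;     // integer division floors
}

// nominal start time coordinate of the given frame
int64_t timeOf (int64_t frameNr, int fps)
{
    return frameNr * USEC_PER_SEC / fps;
}
```

Job planning then amounts to establishing the anchor frame `frameAt(start, fps)` and emitting a contiguous run of frame numbers, each paired with its `timeOf` coordinate as part of the corresponding RenderJob.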
 
 {{red{stalled since 2014}}} -- development on this (important) topic has been postponed. Moreover, some rough edges remain within the Design &rarr; see [[some notes...|AboutMonads]]
 
@@ -2596,7 +2588,7 @@ Used within Lumiera as a foundation for working with raw video and audio media d
 
-
__G__roup __of__ __P__ictures: several compressed video formats don't encode single frames. Normally, such formats are considered mere //delivery formates// but it was one of the key strenghts of Cinelrra from start to be able to do real non linear editing on such formats (like the ~MPEG2-ts unsed in HDV video). The problem of course is that the data backend needs to decode the whole GOP to be serve  single raw video frames.
+
__G__roup __of__ __P__ictures: several compressed video formats don't encode single frames. Normally, such formats are considered mere //delivery formats// but it was one of the key strengths of Cinelerra from the start to be able to do real non linear editing on such formats (like the ~MPEG2-ts used in HDV video). The problem of course is that the data Vault needs to decode the whole GOP to serve single raw video frames.
 
 For this Lumiera design, we could consider making GOP just another raw media data frame type and integrate this decoding into the render pipeline, similar to an effect based on several source frames for every calculated output frame.
 
@@ -2649,7 +2641,7 @@ In the typical standard configuration, there is (at least) a video master and a
 
//This page serves to shape and document the design of the global pipes//
-Many aspects regarding the global pipes turned out while clarifying other parts of ~Proc-Layer's design. For some time it wasn't even clear if we'd need global pipes -- common video editing applications get on without. Mostly it was due to the usefulness of the layout found on sound mixing desks, and a vague notion to separate time-dependant from global parts, which finally led me to favouring such a global facility. This decision then helped in separating the concerns of timeline and sequence, making the //former// a collection of non-temporal entities, while the latter concentrates on time varying aspects.
+Many aspects regarding the global pipes emerged while clarifying other parts of ~Steam-Layer's design. For some time it wasn't even clear if we'd need global pipes -- common video editing applications get on without them. Mostly it was the usefulness of the layout found on sound mixing desks, together with a vague notion to separate time-dependent from global parts, which finally led me to favour such a global facility. This decision then helped in separating the concerns of timeline and sequence, making the //former// a collection of non-temporal entities, while the latter concentrates on time varying aspects.
 
 !Design problem with global Pipes
Actually building up the implementation of global pipes poses a rather subtle design problem: it is difficult to determine how to do it //right.//
@@ -2717,7 +2709,7 @@ we need a test setup for this investigation.
 * realistic: shall reflect the situation in our actual UI
 As starting point, {{red{in winter 2016/17}}} the old (broken) timeline panel was moved aside and a new panel was attached for GTK experiments. These basically confirmed the envisioned approach; it is possible to place widgets freely onto the canvas; they are drawn in insert order, which allows for overlapped widgets (and mouse events are dispatched properly, as you'd expect). Moreover, it is also possible to draw directly onto the canvas, by overriding the {{{on_draw()}}}-function. However, some (rather trivial) adjustments need to be done to get a virtual canvas, which moves along with the placed widgets. That is, GtkLayoutWidget handles scrolling of embedded widgets automatically, but you need to adjust the Cairo drawing context manually to move along. The aforementioned experimental code shows how.
 
-After that initial experiments, my focus shifted to the still unsatisfactory top-level UI structure, and I am working towards an initial integration with Proc-Layer since then.
+After these initial experiments, my focus shifted to the still unsatisfactory top-level UI structure, and since then I have been working towards an initial integration with Steam-Layer.
 
@@ -2833,9 +2825,9 @@ this->invoke (cmd::scope_addClip, scope(HERE), element(RECENT))
-The topic of command binding addresses the way to access, parametrise and issue [[»Proc-Layer Commands«|CommandHandling]] from within the UI structures.
+The topic of command binding addresses the way to access, parametrise and issue [[»Steam-Layer Commands«|CommandHandling]] from within the UI structures.
Basically, commands are addressed by-name -- yet the fact that there is a huge number of commands, which moreover need to be provided with actual arguments, picked up from some kind of //current context// -- all this together turns a seemingly simple function invocation into a challenging task.
-The organisation of the Lumiera UI calls for a separation between immediate low-level UI element reactions, and anything related to the user's actions when working with the elements in the [[Session]] or project. The immediate low-level UI mechanics is implemented directly within the widget code, whereas to //"work on elements in the session",// we'd need a collaboration spanning UI-Layer and Proc-Layer. Reactions within the UI mechanics (like e.g. dragging a clip) need to be interconnected and translated into "sentences of operation", which can be sent in the form of a fully parametrised command instance towards the ProcDispatcher
+The organisation of the Lumiera UI calls for a separation between immediate low-level UI element reactions, and anything related to the user's actions when working with the elements in the [[Session]] or project. The immediate low-level UI mechanics is implemented directly within the widget code, whereas to //"work on elements in the session",// we'd need a collaboration spanning UI-Layer and Steam-Layer. Reactions within the UI mechanics (e.g. dragging a clip) need to be interconnected and translated into "sentences of operation", which can be sent in the form of a fully parametrised command instance towards the SteamDispatcher.
 * questions of architecture related to command binding &rarr; GuiCommandBindingConcept
 * study of pivotal action invocation situations &rarr; CommandInvocationAnalysis
 * actual design of command invocation in the UI &rarr; GuiCommandCycle
@@ -2850,7 +2842,7 @@ The organisation of the Lumiera UI calls for a separation between immediate low-
 
 !prerequisites for issuing a command
Within the Lumiera architecture, with its very distinct separation between [[Session]] and interface view, several prerequisites have to be met before we're able to operate on the model.
-* we need a pre-written script, which directly works on the entities reachable through the session interface &rarr; [[Command handling in Proc-Layer|CommandHandling]]
+* we need a pre-written script, which directly works on the entities reachable through the session interface &rarr; [[Command handling in Steam-Layer|CommandHandling]]
 * we need to complement this script with a state capturing script and a script to undo the given action
 * we need to combine these fixed snippets into a //command prototype.//
 * we need to care for the supply of parameters
@@ -2889,12 +2881,12 @@ This contrastive approach attempts to keep knowledge and definition clustered in
Within the Lumiera UI, we distinguish between core concerns and the //local mechanics of the UI.// The latter is addressed in the usual way, based on a variation of the [[MVC-Pattern|http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller]]. The UI toolkit, here GTK, affords ample ways to express actions and reactions within this framework, where widgets in the presentation view are wired with the corresponding controllers and vice versa (GTK terms these connections //"signals"//; we rely on {{{libSigC++}}} for the implementation).
A naive approach would extend these mature mechanisms to also cover the actual functionality of the application. This compelling solution allows to get "something tangible" up and running quickly, yet -- in the long run -- inevitably leads to core concerns being tangled into the presentation layer, which in turn becomes hard to maintain and loaded with "code behind". Since we are here "for the long run", we immediately draw the distinction between UI mechanics and core concerns. The latter are, by decree and axiom, required to perform without even a UI layer running. This decision gives rise to the challenge of how to form and integrate the invocation of ''core commands'' into the presentation layer.
 
-In a nutshell, we understand each such core command as a ''sentence'', with a //subject//, a //predication//, which is the command script in ~Proc-Layer and can be represented by an ID, and possibly additional arguments. And the key point to note is: //such an action sentence needs to be formed, before it can be issued.//
+In a nutshell, we understand each such core command as a ''sentence'', with a //subject//, a //predication//, which is the command script in ~Steam-Layer and can be represented by an ID, and possibly additional arguments. And the key point to note is: //such an action sentence needs to be formed, before it can be issued.//
 [>img[Command usage in UI|uml/Command-ui-usage.png]]
 
 !use case analysis
 To gain some understanding of the topic, we pose the question "who has to deal with core commands"?
-* the developer of ~Proc-Layer, obviously. The result of that development process is a set of [[command definitions|CommandHandling]], which get installed during start-up of the SessionSubsystem
+* the developer of ~Steam-Layer, obviously. The result of that development process is a set of [[command definitions|CommandHandling]], which get installed during start-up of the SessionSubsystem
 * global menu actions (and keybindings) want to issue a specific command, but possibly also need context information
* a widget, button or context-menu binding typically wants to trigger a command on some [[tangible element|UI-Element]] (widget or controller), but also needs to prepare the arguments prior to invocation
 * some [[interaction state manager|InteractionState]] observes contextual change and needs to mark possible consequences for invoking a given command
@@ -2920,7 +2912,7 @@ from these use cases, we can derive the //crucial activities for command handlin
 *:a known command is triggered with likewise known arguments
 *:* just the global ~Command-ID (ID of the prototype) is sent over the UI-Bus, together with the arguments
 *:* the {{{CmdInstanceManager}}} in Proc creates an anonymous clone copy instance from the prototype
-*:* arguments are bound and the instance is handed over into the ProcDispatcher, without any further registration
+*:* arguments are bound and the instance is handed over into the SteamDispatcher, without any further registration
 *;context bound
*:invocation of a command is formed within a context, typically through an //interaction gesture.//
 *:most, if not all arguments of the command are picked up from the context, based on the current [[Spot]]
@@ -2931,12 +2923,12 @@ from these use cases, we can derive the //crucial activities for command handlin
 *:* ~UI-Elements use the services of {{{CmdContext}}} to act as observer of state changes &rarr; GuiCommandAccess
 *:* when a command is completely parametrised, it can be invoked. The managing {{{InteractionState}}} knows about this
 *:* on invocation, the ID of the instance and the fully resolved arguments are sent via UI-Bus to the {{{CmdInstanceManager}}}
-*:* which in turn removes the instance handle from its registration table and hands it over into the ProcDispatcher
+*:* which in turn removes the instance handle from its registration table and hands it over into the SteamDispatcher
* in any case, only the {{{CmdInstanceManager}}} needs to know about this actual command instance; there is no global registration
 [<img[Access to Session Commands from UI|uml/Command-ui-access.png]]
 An immediate consequence is that command instances may be formed //per instance// of InteractionState. Each distinct kind of control system has its own instances, which are kept around, until they are ready for invocation. Each invocation "burns" an instance -- on next access, a new instance ID will be allocated, and the next command invocation cycle starts...
 
-Command instances are like prototypes -- thus each additional level of differentiation will create a clone copy and decorate the basic command ID. This is necessary, since the same command may be used and parametrised differently at various places in the UI. If necessary, the {{{CmdInstanceManager}}} internally maintains and tracks a prepared anonymous command instance within a local registration table. The //smart-handle//-nature of command instance is enough to keep concurrently existing instances apart; instances might be around for an extended period, because commands are enqueued with the ProcDispatcher.
+Command instances are like prototypes -- thus each additional level of differentiation will create a clone copy and decorate the basic command ID. This is necessary, since the same command may be used and parametrised differently at various places in the UI. If necessary, the {{{CmdInstanceManager}}} internally maintains and tracks a prepared anonymous command instance within a local registration table. The //smart-handle//-nature of a command instance is enough to keep concurrently existing instances apart; instances might be around for an extended period, because commands are enqueued with the SteamDispatcher.
 
 ''command definition'':
 &rarr; Command scripts are defined in translation units in {{{proc/cmd}}}
@@ -2971,7 +2963,7 @@ The global access point to component views is the ViewLocator within Interaction
 * destroy a specific view
For all these direct access operations, elements are designated by a private name-ID, which is actually more like a type-~ID, and just serves to distinguish the element from its siblings. The same ~IDs are used for the components in [[UI coordinate specifications|UICoord]]; both usages are closely interconnected, because view access is accomplished by forming a UI coordinate path to the element, which is then used to navigate the internal UI widget structure to reach the actual implementation element.
 
-While these aforementioned access operations expose a strictly typed direct reference to the respective view component and thus allow to //manage them like child objects,// in many cases we are more interested in UI elements representing tangible elements from the session. In those cases,  it is sufficient to address the desired component view just via the UI-Bus. This is possible since component ~IDs of such globally relevant elements are formed systematically and thus always predictable: it is the same ID as used within Proc-Layer, which basically is an {{{EntryID<TYPE>}}}, where {{{TYPE}}} denotes the corresponding model type in the [[Session model|HighLevelModel]].
+While these aforementioned access operations expose a strictly typed direct reference to the respective view component and thus allow to //manage them like child objects,// in many cases we are more interested in UI elements representing tangible elements from the session. In those cases, it is sufficient to address the desired component view just via the UI-Bus. This is possible since component ~IDs of such globally relevant elements are formed systematically and thus always predictable: it is the same ID as used within Steam-Layer, which basically is an {{{EntryID<TYPE>}}}, where {{{TYPE}}} denotes the corresponding model type in the [[Session model|HighLevelModel]].
 
 !!!Configuration of view allocation
Since view allocation offers a choice amongst several complex patterns of behaviour, it seems adequate to offer at least some central configuration site with a DSL for readability. That being said -- it is conceivable that we'll have to open this topic altogether for general configuration by the user. For this reason, the configuration site and DSL are designed in a way to foster further evolution of the possibilities...
@@ -3082,7 +3074,7 @@ While the process of probing and matching the location specification finally yie
 
-All communication between Proc-Layer and GUI has to be routed through the respective LayerSeparationInterfaces. Following a fundamental design decision within Lumiera, these interface are //intended to be language agnostic// &mdash; forcing them to stick to the least common denominator. Which creates the additional problem of how to create a smooth integration without forcing the architecture into functional decomposition style. To solve this problem, we rely on ''messaging'' rather than on a //business facade// -- our facade interfaces are rather narrow and limited to lifecycle management. In addition, the UI exposes a [[notification facade|GuiNotificationFacade]] for pushing back status information created as result of the edit operations, the build process and the render tasks.
+All communication between Steam-Layer and GUI has to be routed through the respective LayerSeparationInterfaces. Following a fundamental design decision within Lumiera, these interfaces are //intended to be language agnostic// &mdash; forcing them to stick to the least common denominator. This creates the additional problem of how to achieve a smooth integration without forcing the architecture into functional decomposition style. To solve this problem, we rely on ''messaging'' rather than on a //business facade// -- our facade interfaces are rather narrow and limited to lifecycle management. In addition, the UI exposes a [[notification facade|GuiNotificationFacade]] for pushing back status information created as result of the edit operations, the build process and the render tasks.
 
 !anatomy of the Proc/GUI interface
 * the GuiFacade is used as a general lifecycle facade to start up the GUI and to set up the LayerSeparationInterfaces.<br/>It is implemented by a class //in core// and loads the Lumiera ~GTK-UI as a plug-in.
@@ -3101,12 +3093,12 @@ Rather, the core sends ''diff messages'' up to the UI, indicating how it sees th
 
//install a listener into the session to cause sending of population diff messages.//
The Lumiera application is not structured as a UI with internal functionality dangling below. Rather, the architecture calls for several self-contained subsystems, where all of the actual "application content" is modelled within the [[Session]] subsystem. The UI implements the //mechanics of user interaction// -- yet as far as content is concerned, it plays a passive role. Within the UI-Layer, there is a hierarchy of presentation entities to mirror the structure of the session contents. These entities are built step by step, in reception of //population diff messages// sent upwards from the session.
 
-To establish this interaction pattern, a listener gets installed into the session. In fact, the UI just issues a specific command to indicate it is ready for receiving content; this command, when executed down within the ProcDispatcher, causes the setup of aforementioned listener. This listener first has to //catch up// with all content already existing within the session -- it still needs to be defined {{red{as of 7/2018}}} in which form this existing content can be translated into a diff to reflect it within the UI. Since the session itself is {{red{planned (as of 2018)}}} to be persisted as a sequence of building instructions (CQRS, event sourcing), it might be possible just to replay the appropriate defining messages from the global log. After that point, the UI is attached and synchronised with the session contents and receives any altering messages in the order they emerge.
+To establish this interaction pattern, a listener gets installed into the session. In fact, the UI just issues a specific command to indicate it is ready for receiving content; this command, when executed down within the SteamDispatcher, causes the setup of the aforementioned listener. This listener first has to //catch up// with all content already existing within the session -- it still needs to be defined {{red{as of 7/2018}}} in which form this existing content can be translated into a diff to reflect it within the UI. Since the session itself is {{red{planned (as of 2018)}}} to be persisted as a sequence of building instructions (CQRS, event sourcing), it might be possible just to replay the appropriate defining messages from the global log. After that point, the UI is attached and synchronised with the session contents and receives any altering messages in the order they emerge.
 
 !Trigger
 It is clear that content population can commence only when the GTK event loop is already running and the application frame is visible and active. For starters, this sequence avoids all kinds of nasty race conditions. And, in addition, it ensures a reactive UI; if populating content takes some time, the user may watch this process through the visible clues given just by changing the window contents and layout in live state.
 
-And since we are talking about a generic facility, the framework of content population has to be established in the GuiTopLevel. Now, the top-level in turn //starts the event loop// -- thus we need to //schedule// the trigger for content population. The existing mechanisms are not of much help here, since in our case we //really need a fully operative application// once the results start bubbling up from Proc-Layer. The {{{Gio::Application}}} offers an "activation signal" -- yet in fact this is only necessary due to the internals of {{{Gio::Application}}}, with all this ~D-Bus registration stuff. Just showing a GTK window widget in itself does not require a running event loop (although one does not make much sense without the other). The mentioned {{{signal_activation()}}} is emitted from {{{g_application_run()}}} (actually the invocation of {{{g_application_activate()}}} is burried within {{{g_application_real_local_command_line()}}}, which means, the activation happens //immediately before// entering the event loop. Which pretty much rules out this approach in our case, since Lumiera doesn't use a {{{Gtk::Application}}}, and moreover the signal would still induce the (small) possibility of a race between the actual opening of the GuiNotificationFacade and the start of content population from the [[Proc-Layer thread|SessionSubsystem]].
+And since we are talking about a generic facility, the framework of content population has to be established in the GuiTopLevel. Now, the top-level in turn //starts the event loop// -- thus we need to //schedule// the trigger for content population. The existing mechanisms are not of much help here, since in our case we //really need a fully operative application// once the results start bubbling up from Steam-Layer. The {{{Gio::Application}}} offers an "activation signal" -- yet in fact this is only necessary due to the internals of {{{Gio::Application}}}, with all this ~D-Bus registration stuff. Just showing a GTK window widget in itself does not require a running event loop (although one does not make much sense without the other). The mentioned {{{signal_activation()}}} is emitted from {{{g_application_run()}}} (actually the invocation of {{{g_application_activate()}}} is buried within {{{g_application_real_local_command_line()}}}), which means the activation happens //immediately before// entering the event loop. Which pretty much rules out this approach in our case, since Lumiera doesn't use a {{{Gtk::Application}}}, and moreover the signal would still induce the (small) possibility of a race between the actual opening of the GuiNotificationFacade and the start of content population from the [[Steam-Layer thread|SessionSubsystem]].
 
 The general plan to trigger content population thus boils down to
 * have the InteractionDirector inject the population trigger with the help of {{{Glib::signal_timeout()}}}
@@ -3321,9 +3313,9 @@ In a preliminary attempt to establish an integration between the GUI and the low
 
Building a layered architecture is a challenge, since the lower layer //really// needs to be self-contained, while prepared for usage by the higher layer.
A major fraction of all desktop applications is written in a way where operational logic is built around the invocation from UI events -- what should be a shell turns into a backbone. One possible way to escape from this common anti-pattern is to introduce a mediating entity, to translate between two partially incompatible demands and concerns: sure, the "tangible stuff" is what matters, but you cannot build any significant piece of technology if all you want is to "serve" the user.
 
-Within the Lumiera GTK UI, we use a proxying model as a mediating entity. It is based upon the ''generic aspect'' of the SessionInterface, but packaged and conditioned in a way to allow a direct mapping of GUI entities on top. The widgets in the UI can be conceived as decorating this model. Callbacks can be wired back, so to transform user interface events into a stream of commands for the Proc-Layer sitting below.
+Within the Lumiera GTK UI, we use a proxying model as a mediating entity. It is based upon the ''generic aspect'' of the SessionInterface, but packaged and conditioned in a way to allow a direct mapping of GUI entities on top. The widgets in the UI can be conceived as decorating this model. Callbacks can be wired back, so as to transform user interface events into a stream of commands for the Steam-Layer sitting below.
 
-The GUI model is largely comprised of immutable ID elements, which can be treated as values. A mutated model configuration in Proc-Layer is pushed upwards as a new structure and translated into a ''diff'' against the previous structure -- ready to be consumed by the GUI widgets; this diff can be broken down into parts and consumed recursively -- leaving it to the leaf widgets to adapt themselves to reflect the new situation.
+The GUI model is largely comprised of immutable ID elements, which can be treated as values. A mutated model configuration in Steam-Layer is pushed upwards as a new structure and translated into a ''diff'' against the previous structure -- ready to be consumed by the GUI widgets; this diff can be broken down into parts and consumed recursively -- leaving it to the leaf widgets to adapt themselves to reflect the new situation.
 &rarr; [[Building blocks of the GUI model|GuiModelElements]]
 &rarr; [[GUI update mechanics|GuiModelUpdate]]
 
@@ -3338,7 +3330,7 @@ A fundamental decision within the Lumiera UI is to build every model-like struct
* or a whole subtree of elements is built up stepwise in response to a ''population diff''. This is a systematic description of a complete sub-structure in its current shape, and is produced as an emanation from a DiffConstituent.
 
 !synchronisation guarantees
-We acknowledge that the gui model is typically used from within the GUI event dispatch thread. This is //not// the thread where any session state is mutated. Thus it is the responsibility of this proxying model within the GUI to ensure that the retrieved structure is a coherent snapshot of the session state. Especially the {{{gui::model::SessionFacade}}} ensures that there was a read barrier between the state retrieval and any preceding mutation command. Actually, this is implemented down in Proc-Layer, with the help of the ProcDispatcher.
+We acknowledge that the gui model is typically used from within the GUI event dispatch thread. This is //not// the thread where any session state is mutated. Thus it is the responsibility of this proxying model within the GUI to ensure that the retrieved structure is a coherent snapshot of the session state. Especially the {{{gui::model::SessionFacade}}} ensures that there was a read barrier between the state retrieval and any preceding mutation command. Actually, this is implemented down in Steam-Layer, with the help of the SteamDispatcher.
 
The forwarding of model changes to the GUI widgets is another concern, since notifications from session mutations arrive asynchronously after each [[Builder]] run. In this case, we send a notification to the widgets registered as listeners, but wait for //them// to call back and fetch the [[diffed state|TreeDiffModel]]. The notification will be dispatched into the GUI event thread (by the {{{GuiNotification}}} façade), which implies that the callback embedded within the notification will also be invoked by the widgets from within the GUI thread.
 
@@ -3368,7 +3360,7 @@ The fundamental pattern for building graphical user interfaces is to segregate i
 
 
 The Lumiera GTK UI is built around a distinct backbone, separate from the structures required and provided by GTK.
-While GTK -- especially in the object oriented incantation given by Gtkmm -- hooks up a hierarchy of widgets into a UI workspace, each of these widgets can and should incorporate the necessary control and data elements. But actually, these elements are local access points to our backbone structure, which we define as the UI-Bus. So, in fact, the local widgets and controllers wired into the interface are turned into ''Decorators'' of a backbone structure. This backbone is a ''messaging system'' (hence the name "Bus"). The terminal points of this messaging system allow for direct wiring of GTK signals. Operations triggered by UI interactions are transformed into [[Command]] invocations into the Proc-Layer, while the model data elements remain abstract and generic. The entities in our UI model are not directly connected to the actual model, but they are in correspondence to such actual model elements within the [[Session]]. Moreover, there is an uniform [[identification scheme|GenNode]].
+While GTK -- especially in the object oriented incarnation given by Gtkmm -- hooks up a hierarchy of widgets into a UI workspace, each of these widgets can and should incorporate the necessary control and data elements. But actually, these elements are local access points to our backbone structure, which we define as the UI-Bus. So, in fact, the local widgets and controllers wired into the interface are turned into ''Decorators'' of a backbone structure. This backbone is a ''messaging system'' (hence the name "Bus"). The terminal points of this messaging system allow for direct wiring of GTK signals. Operations triggered by UI interactions are transformed into [[Command]] invocations into the Steam-Layer, while the model data elements remain abstract and generic. The entities in our UI model are not directly connected to the actual model, but they are in correspondence to such actual model elements within the [[Session]]. Moreover, there is a uniform [[identification scheme|GenNode]].
 
 ;connections
 :all connections are defined to be strictly //optional.//
@@ -3383,13 +3375,13 @@ While GTK -- especially in the object oriented incantation given by Gtkmm -- hoo
 
 
 !building and updating the tree
-The workspace starts out with a single element, corresponding to the »model root« in the ~Proc-Layer HighLevelModel. Initially, or on notification, an [[interface element|UI-Element]] requests a //status update// -- which conceptually implies there is some kind of conversation state. The backbone, as represented by the UI-Bus, might be aware of the knowledge state of its clients and just send an incremental update. Yet the authority or the backbone is absolute. It might, at its own discretion, send a full state update, to which the client elements are expected to comply. The status and update information is exposed in the form of a diff iterator. The client element, which can be a widget or a controller within the workspace, is expected to pull and extract this diff information, adjusting itself and destroying or creating children as applicable. This implies a recursive tree visitation, passing down the diff iterator alongside.
+The workspace starts out with a single element, corresponding to the »model root« in the ~Steam-Layer HighLevelModel. Initially, or on notification, an [[interface element|UI-Element]] requests a //status update// -- which conceptually implies there is some kind of conversation state. The backbone, as represented by the UI-Bus, might be aware of the knowledge state of its clients and just send an incremental update. Yet the authority of the backbone is absolute. It might, at its own discretion, send a full state update, with which the client elements are expected to comply. The status and update information is exposed in the form of a diff iterator. The client element, which can be a widget or a controller within the workspace, is expected to pull and extract this diff information, adjusting itself and destroying or creating children as applicable. This implies a recursive tree visitation, passing down the diff iterator alongside.
 
 Speaking of implementation, this state and update mechanics relies on two crucial provisions: Lumiera's framework for [[tree diff representation|TreeDiffModel]] and the ExternalTreeDescription, which is an abstracted, ~DOM-like rendering of the relevant parts of the session; this model tree is comprised of [[generic node elements|GenNode]] acting as proxy for [[calls into|SessionInterface]] the [[session model|HighLevelModel]] proper.
 
-Considerations regarding the [[structure of custom timeline widgets|GuiTimelineWidgetStructure]] highlight again the necessity of a clean separation of concerns and an "open closed design". For the purpose of updating the timeline(s) to reflect the HighLevelModel in Proc-Layer, several requirements can be identified
+Considerations regarding the [[structure of custom timeline widgets|GuiTimelineWidgetStructure]] highlight again the necessity of a clean separation of concerns and an "open closed design". For the purpose of updating the timeline(s) to reflect the HighLevelModel in Steam-Layer, several requirements can be identified
 * we need incremental updates: we must not start redrawing each and everything on each tiny change
 * we need recursive programming, since this is the only sane way to deal with tree like nested structures.
 * we need specifically typed contexts, driven by the type demands on the consumer side. What doesn't make sense at a given scope needs to be silently ignored
@@ -3415,13 +3407,13 @@ Hereby we introduce a new in-layer abstraction: The UI-Bus.
 
 
 !initiating model updates
-Model updates are always pushed up from Proc-Layer, coordinated by the ProcDispatcher. A model update can be requested by the GUI -- but the actual update will arrive asynchronously. The update information originate from within the [[build process|BuildFixture]]. {{red{TODO 10/2014 clarify the specifics}}}. When updates arrive, a ''diff is generated'' against the current GuiModel contents. The GuiModel is updated to reflect the differences and then notifications for any Receivers or Listeners are scheduled into the GUI event thread. On reception, it is their responsibility in turn to pull the targeted diff. When performing this update, the Listener thus actively retrieves and pulls the diffed information from within the GUI event thread. The GuiModel's object monitor is sufficient to coordinate this handover.
+Model updates are always pushed up from Steam-Layer, coordinated by the SteamDispatcher. A model update can be requested by the GUI -- but the actual update will arrive asynchronously. The update information originates from within the [[build process|BuildFixture]]. {{red{TODO 10/2014 clarify the specifics}}}. When updates arrive, a ''diff is generated'' against the current GuiModel contents. The GuiModel is updated to reflect the differences and then notifications for any Receivers or Listeners are scheduled into the GUI event thread. On reception, it is their responsibility in turn to pull the targeted diff. When performing this update, the Listener thus actively retrieves and pulls the diffed information from within the GUI event thread. The GuiModel's object monitor is sufficient to coordinate this handover.
 &rarr; representation of changes as a [[tree of diffs|TreeDiffModel]]
 &rarr; properties and behaviour of [[generic interface elements|UI-Element]]
 
 !!!timing and layering intricacies
 A relevant question to be settled is as to where the core of each change is constituted. This is relevant due to the intricacies of multithreading: Since the change originates in the build process, but the effect of the change is //pulled// later from within the GUI event thread, it might well happen that at this point, meanwhile further changes entered the model. As such, this is not problematic, as long as taking the diff remains atomic. This leads to quite different solution approaches:
-* we might, at the moment of performing the update, acquire a lock from the ProcDispatcher. The update process may then effectively query down into the session datastructure proper, even through the proxy of a diffing process. The obvious downside is that GUI response might block waiting on an extended operation in Proc, especially when a new build process was started meanwhile. A remedy might be to abort the update in such cases, since its effects will be obsoleted by the build process anyway.
+* we might, at the moment of performing the update, acquire a lock from the SteamDispatcher. The update process may then effectively query down into the session datastructure proper, even through the proxy of a diffing process. The obvious downside is that GUI response might block waiting on an extended operation in the Steam-Layer, especially when a new build process was started meanwhile. A remedy might be to abort the update in such cases, since its effects will be obsoleted by the build process anyway.
 * alternatively, we might incorporate a complete snapshot of all information relevant for the GUI into the GuiModel. Update messages from Proc must be complete and self contained in this case, since our goal is to avoid callbacks. Following this scheme, the first stage of any update would be a push from Proc to the GuiModel, followed by a callback pull from within the individual widgets receiving the notification later. This is the approach we choose for the Lumiera GUI.
 
 !!!information to represent and to derive
@@ -3510,7 +3502,7 @@ Regarding the internal organisation of Lumiera's ~UI-Layer, there is a [[top lev
 This top-level circle is established starting from the UI-Bus (''Nexus'') and the ''UI Manager'', which in turn creates the other dedicated control entities, especially the InteractionDirector. All these build-up steps are triggered right from the UI main() function, right before starting the ''UI event loop''. The remainder of the start-up process is driven by //contextual state,// as discovered by the top-level entities, delegating to the controllers and widgets.
 
 !!!reaching the operative state
-The UI is basically in operative state when the GTK event loop is running. Before this happens, the initial //workspace window// is explicitly created and made visible -- showing an empty workspace frame without content and detail views. However, from that point on, any user interaction with and UI control currently available is guaranteed to yield the desired effect, which is typically to issue and enqueue a command into the ProcDispatcher, or to show/hide some other UI element. Which also means that all backbone components of the UI have to be created and wired prior to entering operative state. This is ensured through the construction of the {{{UIManager}}}, which holds all the relevant core components either as directly managed "~PImpl" members, or as references. The GTK UI event loop is activated through a blocking call of {{{UIManager::performMainLoop()}}}, which also happens to open all external façade interfaces of the UI-Layer. In a similar vein, the //shutdown of the UI// can be effected through the call {{{UIManager::terminateUI()}}}, causing the GTK loop to terminate, and so the UI thread will leave the aforementioned {{{performMainLoop()}}} and commence to destroy the {{{UIManager}}}, which causes disposal of all core UI components.
+The UI is basically in operative state when the GTK event loop is running. Before this happens, the initial //workspace window// is explicitly created and made visible -- showing an empty workspace frame without content and detail views. However, from that point on, any user interaction with any UI control currently available is guaranteed to yield the desired effect, which is typically to issue and enqueue a command into the SteamDispatcher, or to show/hide some other UI element. Which also means that all backbone components of the UI have to be created and wired prior to entering operative state. This is ensured through the construction of the {{{UIManager}}}, which holds all the relevant core components either as directly managed "~PImpl" members, or as references. The GTK UI event loop is activated through a blocking call of {{{UIManager::performMainLoop()}}}, which also happens to open all external façade interfaces of the UI-Layer. In a similar vein, the //shutdown of the UI// can be effected through the call {{{UIManager::terminateUI()}}}, causing the GTK loop to terminate, and so the UI thread will leave the aforementioned {{{performMainLoop()}}} and commence to destroy the {{{UIManager}}}, which causes disposal of all core UI components.
 
 !content population
 In accordance with the Lumiera application architecture in general, the UI is not allowed to open and build its visible parts on its own behalf. Content and structure is defined by the [[Session]] while the UI takes on a passive role to receive and reflect the session's content. This is accomplished by issuing a //content request,// which in turn installs a listener within the session. This listener in turn causes a //population diff// to be sent upwards into the UI-Layer. Only in response to these content messages the UI will build and activate the visible structures for user interaction.
@@ -3755,7 +3747,7 @@ Finally, this example shows an ''automation'' data set controlling some paramete
 
!Observations, Ideas, Proposals
-''this page is a scrapbook for collecting ideas'' &mdash; please don't take anything noted here too literal. While writing code, I observe that I (ichthyo) follow certain informal guidelines, some of which I'd like to note down because they could evolve into general style guidelines for the Proc-Layer code.
+''this page is a scrapbook for collecting ideas'' &mdash; please don't take anything noted here too literally. While writing code, I observe that I (ichthyo) follow certain informal guidelines, some of which I'd like to note down because they could evolve into general style guidelines for the Steam-Layer code.
 * ''Inversion of Control'' is the leading design principle.
 * but deliberately we stay just below the level of using Dependency Injection. Singletons and call-by-name are good enough. We're going to build //one// application, not any conceivable application.
 * write error handling code only if the error situation can be actually //handled// at this place. Otherwise, be prepared for exceptions just passing by and thus handle any resources by "resource acquisition is initialisation" (RAII). Remember: error handling defeats decoupling and encapsulation.
@@ -4087,13 +4079,13 @@ From experiences with other middle scale projects, I prefer having the test code
 ** nested namespace controller
 ** nested namespace builder
 ** nested namespace session
-* Subdir src/proc/engine (namespace engine) uses directly the (local) interface components StateProxy and ParamProvider; triggering of the render process is initiated by the controller and can be requested via the controller facade. Normally, the playback/render controller located in the backend will just use this interface and won't be aware of the build process at all.
+* Subdir src/proc/engine (namespace engine) uses directly the (local) interface components StateProxy and ParamProvider; triggering of the render process is initiated by the controller and can be requested via the controller facade. Normally, the playback/render controller located in the Vault will just use this interface and won't be aware of the build process at all.
 
 [img[Example: Interfaces/Namespaces of the ~Session-Subsystems|uml/fig130053.png]]
 
-//one specific way to prepare and issue a ~Proc-Layer-Command from the UI.//
+//one specific way to prepare and issue a ~Steam-Layer-Command from the UI.//
 The actual persistent operations on the session model are defined through DSL scripts acting on the session interface, and configured as a //command prototype.// Typically these need to be enriched with at least the actual subject to invoke this command on; many commands require additional parameters, e.g. some time or colour value. These actual invocation parameters need to be picked up from UI elements, sometimes even from the context of the triggering event. When all arguments are known, finally the command -- as identified by a command-ID -- can be issued on any bus terminal, i.e. on any [[tangible interface element|UI-Element]].
 &rarr; CommandInvocationAnalysis
 
@@ -4146,7 +4138,7 @@ The general idea is, that each facade interface actually provides access to a sp
 |~|DisplayFacade|pushing frames to a display/viewer|
 |Proc|SessionFacade|session lifecycle|
 |~|EditFacade|edit operations, object mutations|
-|~|PlayerDummy|player mockup, maybe move to backend?|
+|~|PlayerDummy|player mockup, maybe move to Vault?|
 |//Lumiera's major interfaces//|c
 
 
@@ -4178,13 +4170,13 @@ The ''shut down'' sequence does exactly that: halt processing and rendering, dis
 
-Opening and accessing media files on disk poses several problems, most of which belong to the domain of Lumiera's data backend. Here, we focus on the questions related to making media data available to the session and the render engine. Each media will be represented by an MediaAsset object, which indeed could be a compound object (in case of MultichannelMedia). Building this asset object thus includes getting information from the real file on disk. For delegating this to the backend, we use the following query interface:
+Opening and accessing media files on disk poses several problems, most of which belong to the domain of Lumiera's data Vault. Here, we focus on the questions related to making media data available to the session and the render engine. Each media will be represented by a MediaAsset object, which indeed could be a compound object (in case of MultichannelMedia). Building this asset object thus includes getting information from the real file on disk. For delegating this to the Vault, we use the following query interface:
 * {{{queryFile(char* name)}}} requests accessing the file and yields some (opaque) handle when successful.
 * {{{queryChannel(fHandle, int)}}} will then be issued in sequence with ascending index numbers, until it returns {{{NULL}}}.
 * the returned struct (pointer) will provide the following information:
 ** some identifier which can be used to create a name for the corresponding media (channel) asset
 ** some identifier characterizing the access method (codec) needed to get at the media data. This should be rather a high level description of the media stream type, e.g. "H264"
-** some (opaque) handle usable for accessing this specific stream. When the render engine later on pulls data for this channel, it will pass this handle down to the backend.
+** some (opaque) handle usable for accessing this specific stream. When the render engine later on pulls data for this channel, it will pass this handle down to the Vault.
 
 {{red{to be defined in more detail later...}}}
 
@@ -4311,9 +4303,9 @@ __10/2008__: the allocation mechanism can surely be improved later, but for now
 
-The Proc-Layer is designed such as to avoid unnecessary assumptions regarding the properties of the media data and streams. Thus, for anything which is not completely generic, we rely on an abstract [[type description|StreamTypeDescriptor]], which provides a ''Facade'' to an actual library implementation. This way, the fundamental operations can be invoked, like allocating a buffer to hold media data.
+The Steam-Layer is designed so as to avoid unnecessary assumptions regarding the properties of the media data and streams. Thus, for anything which is not completely generic, we rely on an abstract [[type description|StreamTypeDescriptor]], which provides a ''Facade'' to an actual library implementation. This way, the fundamental operations can be invoked, like allocating a buffer to hold media data.
 
-In the context of Lumiera and especially in the Proc-Layer, __media implementation library__ means
+In the context of Lumiera and especially in the Steam-Layer, __media implementation library__ means
 * a subsystem which allows to work with media data of a specific kind
 * such as to provide the minimal set of operations
 ** allocating a frame buffer
@@ -4326,7 +4318,7 @@ Because we deliberately won't make any asumptions about the implementation libra
 It would be possible to circumvent this problem by requiring all supported implementation libraries to be known at compile time, because then the actual media implementation type could be linked to a facade type by generic programming. Indeed, Lumiera follows this route with regards to the possible kinds of MObject or [[Asset]] &mdash; but to the contraty, for the problem in question here, being able to include support for a new media data type just by adding a plugin by far outweights the benefits of compile-time checked implementation type selection. So, as a consequence of this design decision we //note the possibility of the media file type discovery code to be misconfigured// and select the //wrong implementation library at runtime.// And thus the render engine needs to be prepared for the source reading node of any pipe to flounder completely, and protect the rest of the system accordingly
 
-
+
Of course: Cinelerra currently leaks memory and crashes regularilly. For the newly written code, besides retaining the same level of performance, a main goal is to use methods and techniques known to support the writing of quality code. So, besides the MultithreadConsiderations, a solid strategy for managing the ownership of allocated memory blocks is necessary right from start.
 
 !Problems
@@ -4351,12 +4343,12 @@ For the case here in question this seems to be the __R__esource __A__llocation _
 
 !!!rather harmless
 * Frames (buffers), because they belong to a given [[RenderProcess (=StateProxy)|StateProxy]] and are just passed in into the individual [[ProcNode]]s. This can be handled consistently with conventional methods.
-* each StateProxy belongs to one top-level call to the [[Controller-Facade|Controller]]
+* each StateProxy belongs to one top-level call to the ~Controller-Facade
 * similar for the builder tools, which belong to a build process. Moreover, they are pooled and reused.
 * the [[sequences|Sequence]] and the defined [[assets|Asset]] belong together to one [[Session]]. If the Session is closed, this means a internal shutdown of the whole ProcLayer, i.e. closing of all GUI representations and terminating all render processes. If these calles are implemented as blocking operations, we can assert that as long as any GUI representation or any render process is running, there is a valid session and model.
 
 !using Factories
-And, last but not least, doing large scale allocations is the job of the backend. Exceptions being long-lived objects, like the session or the sequences, which are created once and don't bear the danger of causing memory pressure. Generally speaking, client code shouldn't issue "new" and "delete" when it comes in handy. Questions of setup and lifecycle should allways be delegated, typically through the usage of some [[factory|Factories]], which might return the product conveniently wrapped into a RAII style handle. Memory allocation is crucial for performance, and needs to be adapted to the actual platform -- which is impossible unless abstracted and treated as a separate concern.
+And, last but not least, doing large scale allocations is the job of the Vault. Exceptions being long-lived objects, like the session or the sequences, which are created once and don't bear the danger of causing memory pressure. Generally speaking, client code shouldn't issue "new" and "delete" when it comes in handy. Questions of setup and lifecycle should always be delegated, typically through the usage of some [[factory|Factories]], which might return the product conveniently wrapped into a RAII style handle. Memory allocation is crucial for performance, and needs to be adapted to the actual platform -- which is impossible unless abstracted and treated as a separate concern.
 
@@ -4381,7 +4373,7 @@ For each meta asset instance, initially a //builder// is created for setting up
-Lumiera's Proc-Layer is built around //two interconnected models,// mediated by the [[Builder]]. Basically, the &rarr;[[Session]] is an external interface to the HighLevelModel, while the &rarr;RenderEngine operates the structures of the LowLevelModel.
+Lumiera's Steam-Layer is built around //two interconnected models,// mediated by the [[Builder]]. Basically, the &rarr;[[Session]] is an external interface to the HighLevelModel, while the &rarr;RenderEngine operates the structures of the LowLevelModel.
Our design of the models (both [[high-level|HighLevelModel]] and [[low-level|LowLevelModel]]) relies partially on dependent objects being kept consistently in sync. Currently (2/2010), __ichthyo__'s assessment is to consider this topic not important and pervasive enough to justify building a dedicated solution, like e.g. a central tracking and registration service. An important point to consider with this assessment is the fact that the session implementation is deliberately kept single-threaded. While this simplifies reasoning, we also lack one central place to handle this issue, and thus care has to be taken to capture and treat all the relevant individual dependencies properly at the implementation level.
@@ -4789,7 +4781,7 @@ This is possible because the operation point has been provided (by the mould) wi
 
-An ever recurring problem in the design of Luimiera's ~Proc-Layer is how to refer to output destinations, and how to organise them.
+An ever recurring problem in the design of Lumiera's ~Steam-Layer is how to refer to output destinations, and how to organise them.
 Wiring the flexible interconnections between the [[pipes|Pipe]] should take into account both the StreamType and the specific usage context ([[scope|PlacementScope]]) -- and the challenge is to avoid hard-linking of connections and tangling with the specifics of the target to be addressed and connected. This page, started __6/2010__ by collecting observations to work out the relations, arrives at defining a //key abstraction// of output management.
 
 !Observations
@@ -4946,7 +4938,7 @@ First and foremost, mapping can be seen as a //functional abstraction.// As it's
 
Within the Lumiera player and output subsystem, actually sending data to an external output requires to allocate an ''output slot''
-This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)
+This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows us to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (Vault, Stage), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)
 
 !Properties of an output slot
 Each OutputSlot is an unique and distinguishable entity. It corresponds explicitly to an external output, or a group of such outputs (e.g. left and right soundcard output channels), or an output file or similar capability accepting media content. First off, an output slot needs to be provided, configured and registered, using an implementation for the kind of media data to be output (sound, video) and the special circumstances of the output capability (render a file, display video in a GUI widget, send video to a full screen display, establish a Jack port, just use some kind of "sound out"). An output slot is always limited to a single kind of media, and to a single connection unit, but this connection may still be comprised of multiple channels (stereoscopic video, multichannel sound).
@@ -5003,7 +4995,7 @@ Thus there are two serious problem situations
 
The OutputSlot interface describes a point where generated media data can actually be sent to the external world. It is expected to be implemented by adapters and bridges to existing drivers or external interface libraries, like a viewer widget in the GUI, ALSA or Jack sound output, rendering to file, using an external media format library. The design of this core facility was rather difficult and stretched out over quite some time span -- this page documents the considerations and the decisions taken.
 
 !intention
-OutpotSlot is a metaphor to unify the organisation of actual (system level) outputs; using this concept allows to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)
+OutputSlot is a metaphor to unify the organisation of actual (system level) outputs; using this concept allows us to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (Vault, Stage), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)
 
 !design possibilities
 !!properties as a starting point
@@ -5092,7 +5084,7 @@ The rationale is for all states out-of-order to transition into the {{{BLOCKED}}
 
-The Lumiera Processing Layer is comprised of various subsystems and can be separated into a low-level and a high-level part. At the low-level end is the [[Render Engine|OverviewRenderEngine]] which basically is a network of render nodes cooperating closely with the Backend Layer in order to carry out the actual playback and media transforming calculations. Whereas on the high-level side we find several different [[Media Objects|MObjects]] that can be placed into the session, edited and manipulated. This is complemented by the [[Asset Management|Asset]], which is the "bookkeeping view" of all the different "things" within each [[Session|SessionOverview]].
+The Lumiera Processing Layer is comprised of various subsystems and can be separated into a low-level and a high-level part. At the low-level end is the [[Render Engine|OverviewRenderEngine]], which basically is a network of render nodes cooperating closely with the Vault Layer in order to carry out the actual playback and media transforming calculations. On the high-level side we find several different [[Media Objects|MObjects]] that can be placed into the session, edited and manipulated. This is complemented by the [[Asset Management|Asset]], which is the "bookkeeping view" of all the different "things" within each [[Session|SessionOverview]].
 
 There is rather strong separation between these two levels, and &mdash; <br/>correspondingly you'll encounter the data held within the Processing Layer organized in two different views, the ''high-level-model'' and the ''low-level-model''
 * from users (and GUI) perspective, you'll see a [[Session|SessionOverview]] with a timeline-like structure, where various [[Media Objects|MObjects]] are arranged and [[placed|Placement]]. By looking closer, you'll find that there are data connections and all processing is organized around processing chains or [[pipes|Pipe]], which can be either global (in the Session) or local (in real or [[virtual|VirtualClip]] clips)
@@ -5101,10 +5093,10 @@ There is rather strong separation between these two levels, and &mdash; <
 [img[Block Diagram|uml/fig128005.png]]
 
-
-Render Engine, [[Builder]] and [[Controller]] are closely related Subsystems. Actually, the [[Builder]] //creates// a newly configured Render Engine //for every// RenderProcess. Before doing so, it queries from the Session (or, to be more precise, from the [[Fixture]] within the current Session) all necessary Media Object Placement information. The [[Builder]] then derives from this information the actual assembly of [[Processing Nodes|ProcNode]] comprising the Render Engine. Thus:
+
+Render Engine, [[Builder]] and [[Dispatcher(Controller)|SteamDispatcher]] are closely related components. Actually, the [[Builder]] //creates// a newly configured Render Engine //for every// RenderProcess. Before doing so, it queries from the Session (or, to be more precise, from the [[Fixture]] within the current Session) all necessary Media Object Placement information. The [[Builder]] then derives from this information the actual assembly of [[Processing Nodes|ProcNode]] comprising the Render Engine. Thus:
  * the source of the build process is a sequence of absolute (explicit) [[Placements|Placement]] called the [[Playlist]]
- * the [[build process|BuildProcess]] is driven, configured and controlled by the [[Controller]] subsystem component. It encompasses the actual playback configuration and State of the System.
+ * the [[build process|BuildProcess]] is driven, configured and controlled by the [[dispatcher|SteamDispatcher]] subsystem component. It encompasses the actual playback configuration and State of the System.
  * the resulting Render Engine is a list of [[Processors]], each configured to calculate a segment of the timeline with uniform properties. Each of these Processors in turn is a graph of interconnected ProcNode.s.
 
 see also: RenderEntities, [[two Examples (Object diagrams)|Examples]] 
@@ -5715,7 +5707,7 @@ So basically placements represent a query interface: you can allways ask the pla
 The fact of being placed in the [[Session|SessionOverview]] is constitutive for all sorts of [[MObject]]s, without Placement they make no sense. Thus &mdash; technically &mdash; Placements act as ''smart pointers''. Of course, there are several kinds of Placements and they are templated on the type of MObject they are refering to. Placements can be //aggregated// to increasingly constrain the resulting "location" of the refered ~MObject. See &rarr; [[handling of Placements|PlacementHandling]] for more details
 
 !Placements as instance
-Effectively, the placement of a given MObject into the Session acts as setting up an concrete instance of this object. This way, placements exhibit a dual nature. When viewed on themselves, like any reference or smart-pointer they behave like values. But, by adding a placement to the session, we again create a unique distinguishable entity with reference semantics: there could be multiple placements of the same object but with varying placement properties. Such a placement-bound-into-the-session is denoted by an generic placement-ID or (as we call it) &rarr; PlacementRef; behind the scenes there is a PlacementIndex keeping track of those "instances" &mdash; allowing us to hand out the PlacementRef (which is just an opaque id) to client code outside the Proc-Layer and generally use it as an shorthand, behaving as if it was an MObject instance
+Effectively, the placement of a given MObject into the Session acts as setting up a concrete instance of this object. This way, placements exhibit a dual nature. Viewed in themselves, like any reference or smart-pointer, they behave like values. But, by adding a placement to the session, we again create a unique distinguishable entity with reference semantics: there could be multiple placements of the same object but with varying placement properties. Such a placement-bound-into-the-session is denoted by a generic placement-ID or (as we call it) &rarr; PlacementRef; behind the scenes there is a PlacementIndex keeping track of those "instances" &mdash; allowing us to hand out the PlacementRef (which is just an opaque id) to client code outside the Steam-Layer and generally use it as a shorthand, behaving as if it was an MObject instance
 
@@ -5976,8 +5968,8 @@ Right within the play process, there is a separation into two realms, relying on &rarr; for overview see also OutputManagement
-
-The [[Player]] is an independent [[Subsystem]] within Lumiera, located at Proc-Layer level. A more precise term would be "rendering and playback coordination subsystem". It provides the capability to generate media data, based on a high-level model object, and send this generated data to an OutputDesignation, creating an continuous and timing controlled output stream. Clients may utilise these functionality through the ''play service'' interface.
+
+The [[Player]] is an independent [[Subsystem]] within Lumiera, located at Steam-Layer level. A more precise term would be "rendering and playback coordination subsystem". It provides the capability to generate media data, based on a high-level model object, and send this generated data to an OutputDesignation, creating a continuous and timing-controlled output stream. Clients may utilise this functionality through the ''play service'' interface.
 
 !subject of performance
 Every play or render process will perform a part of the session. This part can be specified in various ways, but in the end, every playback or render boils down to //performing some model ports.// While the individual model port as such is just an identifier (actually implemented as ''pipe-ID''), it serves as a common identifier used at various levels and tied into several related contexts. For one, by querying the [[Fixture]], the ModelPort leads to the actual ExitNode -- the stuff actually producing data when being pulled. Besides that, the OutputManager used for establishing the play process is able to resolve onto a real OutputSlot -- which, as a side effect, also yields the final data format and data implementation type to use for rendering or playback.
@@ -5999,7 +5991,7 @@ This is the core service provided by the player subsystem. The purpose is to cre
 :when provided with these two prerequisites, the play service is able to build a PlayProcess.
 :for clients, this process can be accessed and maintained through a PlayController, which acts as (copyable) handle and front-end.
 ;engine
-:the actual processing is done by the RenderEngine, which in itself is a compound of several services within VaultLayer and SteamLayer
+:the actual processing is done by the RenderEngine, which in itself is a compound of several services within VaultLayer and Steam-Layer
 :any details of this processing remain opaque for the clients; even the player subsystem just accesses the EngineFaçade
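The handle-and-façade arrangement described above can be sketched as follows. This is an illustrative C++ sketch only, not the actual Lumiera interface; the names `PlayProcess`, `PlayController` and `startPlayback` stand in for whatever the real play service exposes. The point it shows: the controller is a copyable value handle, while the process itself stays opaque and shared.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Opaque playback process -- clients never touch this directly.
class PlayProcess {
    bool playing_ = false;
public:
    void play(bool on)    { playing_ = on; }
    bool playing() const  { return playing_; }
};

// Copyable front-end handle; all copies refer to the same process.
class PlayController {
    std::shared_ptr<PlayProcess> process_;
public:
    explicit PlayController(std::shared_ptr<PlayProcess> p)
        : process_(std::move(p)) { }

    void play()  { process_->play(true);  }
    void pause() { process_->play(false); }
    bool isPlaying() const { return process_->playing(); }
};

// The play service builds the PlayProcess and hands out a controller handle.
PlayController startPlayback() {
    return PlayController(std::make_shared<PlayProcess>());
}
```

Since the handle carries only a shared pointer, passing it around or storing a copy in a GUI widget is cheap, and transport operations issued through any copy affect the one underlying process.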
 
@@ -6024,7 +6016,7 @@ The player subsystem is currently about to be designed and built up; some time a
 __Joelholdsworth__ and __Ichthyo__ created this player mockup in 1/2009 to find out about the implementation details regarding integration and collaboration between the layers. There is no working render engine yet, thus we use a ~DummyImageGenerator for creating faked yuv frames to display. Within the GUI, there is a ~PlaybackController hooked up with the transport controls on the timeline pane.
 # first everything was contained within ~PlaybackController, which spawns a thread for periodically creating those dummy frames
-# then, a ~PlayerService was factored out, now implemented within ~Proc-Layer (later to delegate to the emerging real render engine implementation).<br/>A new LayerSeparationInterface called ''~DummyPlayer'' was created and set up as a [[Subsystem]] within main().
+# then, a ~PlayerService was factored out, now implemented within ~Steam-Layer (later to delegate to the emerging real render engine implementation).<br/>A new LayerSeparationInterface called ''~DummyPlayer'' was created and set up as a [[Subsystem]] within main().
 # the next step was to support multiple playback processes going on in parallel. Now, the ~PlaybackController holds a smart-handle to the ~PlayProcess currently generating output for this viewer, and invokes the transport control functions and the pull frame call on this handle.
 # then, also the tick generation (and thus the handling of the thread which pulls the frames) was factored out and pushed down into the mentioned ~PlayProcess. For this to work, the ~PlaybackController now makes a display slot available on the public GUI DisplayFacade interface, so the ~PlayProcessImpl can push up the frames for display within the GUI
 [img[Overview to the dummy player operation|draw/playerArch1.png]]
@@ -6049,7 +6041,7 @@ There can be multiple viewer widgets, to be connected dynamically to multiple pl
 
A Playlist is a sequence of individual Render Engine Processors, each able to render a segment of the timeline. Together, these Processors are able to render the whole timeline (or only part of the timeline, if only a part has to be rendered).
 
-//Note, we have yet to specify how exactly the building and rendering will work together with the backend. There are several possibilities how to structure the Playlist//
+//Note, we have yet to specify how exactly the building and rendering will work together with the Vault. There are several possibilities how to structure the Playlist//
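One conceivable structure -- purely an illustrative sketch, since (as noted) the interplay with the Vault is not yet specified -- is an ordered list of per-segment Processors, where a frame request is routed to the Processor whose segment covers the requested time:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical names, not actual Lumiera code: each Processor covers the
// timeline from its start point up to the start of the next Processor.
struct Processor {
    long start;   // segment start time (inclusive)
    int  id;      // stands in for the node graph of this segment
};

struct Playlist {
    std::vector<Processor> segments;   // sorted ascending by start time

    // find the Processor responsible for time t
    const Processor& processorFor(long t) const {
        auto pos = std::upper_bound(segments.begin(), segments.end(), t,
                       [](long time, Processor const& p){ return time < p.start; });
        assert(pos != segments.begin());   // t must lie within the covered range
        return *(pos - 1);
    }
};
```

The binary search keeps the lookup cheap even when a long timeline decomposes into many segments; whether the real Playlist is flat like this or structured per model port remains open.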
 
@@ -6107,7 +6099,7 @@ One example of this problem is the [[handling of multichannel media|Multichannel
 !!Parallelism
 We need to work out guidelines for dealing with operations going on simultaneously. Certainly, this will divide the application in several different regions. As always, the primary goal is to avoid multithread problems altogether. Typically, this can be achieved by making matters explicit: externalizing state, make the processing subsystems stateless, queue and schedule tasks, use isolation layers.
-* the StateProxy is a key for the individual render processes state, which is managed in separate [[StateFrame]]s in the backend. The [[processing network|ProcNode]] is stateless.
+* the StateProxy is a key for the individual render processes state, which is managed in separate [[StateFrame]]s in the Vault. The [[processing network|ProcNode]] is stateless.
 * the [[Fixture]] provides an isolation layer between the render engine and the Session / high-level model
 * all EditingOperations are not threadsafe intentionally, because they are [[scheduled|ProcLayerScheduler]]
@@ -6129,53 +6121,8 @@ Besides, they provide an __inward interface__ for the [[ProcNode]]s, enabling th
 [img[Asset Classess|uml/fig131077.png]]
 {{red{Note 3/2010}}} it is very unlikely we'll organise the processing nodes as a class hierarchy. Rather it looks like we'll get several submodules/special capabilities configured in within the Builder
-
-
-//The guard and coordinator of any operation within the session subsystem.//
-The session and related components work effectively single threaded. Any tangible operation on the session data structure has to be enqueued as [[command|CommandHandling]] into the dispatcher. Moreover, the [[Builder]] is triggered from the ProcDispatcher; and while the Builder is running, any command processing is halted. The Builder in turn creates or reshapes the processing nodes network, and the changed network is brought into operation with a //transactional switch// -- while render processes on this processing network operate unaffected and essentially multi-threaded.
-
-Enqueueing commands through the SessionCommandFacade into the ProcDispatcher is the official way to cause changes to the session. And the running state of the ProcDispatcher is equivalent with the running state of the //session subsystem as a whole.//
-
-!Requirements
-To function properly as action coordinator of the session subsystem, the dispatcher has to fulfil multiple demands
-;enqueue
-:accept and enqueue command messages concurrently, any time, without blocking the caller
-:*FIFO for //regular commands//
-:*LIFO for //priority requests//  {{red{unimplemented 1/17}}}
-;process
-:dequeue and process entries sequentially
-;sleep
-:work continuously until queue is empty, then enter wait state
-;check point
-:arrive at a well defined check point reliably non blocking ("ensure to make progress")
-:* necessary to know when internal state is consistent
-:* when?
-:** after each command
-:** after builder run
-:** after wake-up
-;manage
-:care for rectifying entries in the queue
-:* ensure they //match// current session, discard obsoleted requests
-:* //aggregate// similar requests
-:* //supersede// by newer commands of a certain kind
-
-!Operational semantics
-The ProcDispatcher is a component with //running state.// There is some kind of working loop, which possibly enters a sleep state when idle. In fact, this loop is executed ''exclusively in the session thread''. This is the very essence of treating the session entirely single threaded, thus evading all the complexities of parallelism. Consequently, the session thread will either
-* execute a command on the session
-* perform the [[Builder]]
-* evaluate loop control logic in the ProcDispatcher
-* block waiting in the ProcDispatcher
-
-Initially the command queue is empty and the ProcDispatcher can be considered idle. Whenever more commands are available in the queue, the dispatcher will handle them one after another, without delay, until the queue is emptied. Yet the Builder run need to be kept in mind. Essentially, the builder models a //dirty state:// whenever a command has touched the session, the corresponding LowLevelModel must be considered out of sync, possibly not reflecting the intended semantics of the session anymore. From a strictly logical view angle, we'd need to trigger the builder after each and every session command -- but it was a very fundamental design decision in Lumiera to allow for a longer running build process, more akin to running a compiler. This decision opens all the possibilities of integrating a knowledge based system and resolution activities to find a solution to match the intended session semantics. For this reason, we decouple the UI actions from session and render engine consistency, and we enqueue session commands, to throttle down the number of builder runs.
-
-So the logic to trigger builder runs has to take some leeway into account. Due to the typical interactive working style of an editing application, session commands might be trickling in in strikes of similar commands, intermingled with tiny pauses. For this reason, the ProcDispatcher implements some //hysteresis,// as far as triggering the builder runs is concerned. The builder is fired in idle state, but only after passing some //latency period.// On the other hand, massive UI activities (especially during a builder run) may have flooded the queue, thus sending the session into an extended period of command processing. From the user's view angle, the application looks non responsive in such a case, albeit not frozen, since the UI can still enqueue further commands and thus retains the ability to react locally on user interaction. To mitigate this problem, the builder should be started anyway after some extended period of command processing, even if the queue is not yet emptied. Each builder run produces a structural diff message sent towards the UI and thus causes user visible changes within the session's UI representation. This somewhat stuttering response conveys to the user a tangible sensation of ongoing activity, while communicating at the same time, at least subconsciously some degree of operational overload. {{red{note 12/2016 builder is not implemented, so consider this planning}}}
-
-Any change to the circumstances determining the ProcDispatcher's behaviour needs to be imparted actively through the public interface -- the dispatcher is not designed to be a state listener or observer. Any such state change notifications are synchronised and cause a wakeup notification to the session thread. For this purpose, enqueuing of further commands counts as state change and is lock protected. Beyond that, any other activities, like //processing// of commands or builder runs, are performed within the session thread without blocking other threads; the locking on the ProcDispatcher is only ever short term to ensure consistent internal state. Clients need to be prepared for the effect of actions to appear asynchronously and with some delay. Especially this means that session switch or shutdown has to await completion of any session command or builder run currently in progress.
-
-When the session is closed or dismantled, further processing in the ProcDispatcher will be disabled, after completing the current command or builder run. This disabled state can be reversed when a new session instance becomes operative. And while the dispatcher will then continue to empty the command queue, most commands in queue will probably be obsoleted and dropped, because of referring to a deceased session instance. Moreover, the lifecycle of the session instances has to be distinguished from the lifecycle of the SessionSubsystem as such. When the latter is terminated, be it by a fatal error in some builder run, or be it due to general shutdown of the application, the ProcDispatcher will be asked to terminate the session thread after completing the current activity in progress. Such an event will also discard any further commands waiting in the dispatcher's queue.
-
-
-
-The middle Layer of our current Architecture plan, i.e. the layer managing all processing and manipulation, while the actual data handling is done in the backend and the user interaction belongs to the GUI Layer.
+
+The middle Layer of our current Architecture plan, i.e. the layer managing all processing and manipulation, while the actual data handling is done in the Vault and the user interaction belongs to the GUI Layer.
 
 &rarr; see the [[Overview]]
 
@@ -6411,7 +6358,7 @@ But for now the decision is to proceed with isolated and specialised QueryResolv
 
-
-Within the Lumiera Proc-Layer, there is a general preference for issuing [[queries|Query]] over hard wired configuration (or even mere table based configuration). This leads to the demand of exposing a //possibility to issue queries// &mdash; without actually disclosing much details of the facility implementing this service. For example, for shaping the general session interface (in 10/09), we need a means of exposing a hook to discover HighLevelModel contents, without disclosing how the model is actually organised internally (namely by using an PlacementIndex).
+
+Within the Lumiera Steam-Layer, there is a general preference for issuing [[queries|Query]] over hard-wired configuration (or even mere table-based configuration). This leads to the demand of exposing a //possibility to issue queries// &mdash; without actually disclosing many details of the facility implementing this service. For example, for shaping the general session interface (in 10/09), we need a means of exposing a hook to discover HighLevelModel contents, without disclosing how the model is actually organised internally (namely, by using a PlacementIndex).
 
 !Analysis of the problem
 The situation can be decomposed as follows.[>img[QueryResolver|uml/fig137733.png]]
@@ -6673,13 +6620,13 @@ At first sight the link between asset and clip-MO is a simple logical relation b
 {{red{Note 1/2015}}} several aspects regarding the relation of clips and single/multichannel media are not yet settled. There is a preliminary implementation in the code base, but it is not yet clear how multichannel media will actually be modelled. Currently, we tend to treat the channel multiplicity rather as a property of the involved media, i.e. we have //one// clip object.
-
-Conceptually, the Render Engine is the core of the application. But &mdash; surprisingly &mdash; we don't even have a distinct »~RenderEngine« component in our design. Rather, the engine is formed by the cooperation of several components spread out over two layers (Backend and Proc-Layer): The [[Builder]] creates a network of [[render nodes|ProcNode]], the [[Scheduler]] triggers individual [[calculation jobs|RenderJob]], which in turn pull data from the render nodes, thereby relying on the [[Backend's services|VaultLayer]] for data access and using plug-ins for the actual media calculations.
+
+Conceptually, the Render Engine is the core of the application. But &mdash; surprisingly &mdash; we don't even have a distinct »~RenderEngine« component in our design. Rather, the engine is formed by the cooperation of several components spread out over two layers (Vault and Steam-Layer): The [[Builder]] creates a network of [[render nodes|ProcNode]], the [[Scheduler]] triggers individual [[calculation jobs|RenderJob]], which in turn pull data from the render nodes, thereby relying on the [[Vault services|VaultLayer]] for data access and using plug-ins for the actual media calculations.
 &rarr; OverviewRenderEngine
 &rarr; EngineFaçade 
 
-
-
-The [[Render Engine|Rendering]] only carries out the low-level and performance critical tasks. All configuration and decision concerns are to be handled by [[Builder]] and [[Controller]]. While the actual connection of the Render Nodes can be highly complex, basically each Segment of the Timeline with uniform characteristics is handled by one Processor, which is a graph of [[Processing Nodes|ProcNode]] discharging into a ExitNode. The Render Engine Components as such are //stateless// themselves; for the actual calculations they are combined with a StateProxy object generated by and connected internally to the [[Controller]], while at the same time holding the Data Buffers (Frames) for the actual calculations.
+
+
+The [[Render Engine|Rendering]] only carries out the low-level and performance critical tasks. All configuration and decision concerns are to be handled by [[Builder]] and [[Dispatcher|SteamDispatcher]]. While the actual connection of the Render Nodes can be highly complex, basically each Segment of the Timeline with uniform characteristics is handled by one Processor, which is a graph of [[Processing Nodes|ProcNode]] discharging into an ExitNode. The Render Engine Components as such are //stateless// themselves; for the actual calculations they are combined with a StateProxy object generated by and connected internally to the Controller {{red{really?? 2018}}}, while at the same time holding the Data Buffers (Frames) for the actual calculations.
 
 {{red{Warning: obsolete as of 9/11}}}
 Currently the Render/Playback is being targeted for implementation; almost everything in this diagram will be implemented in a slightly different way...
@@ -6805,8 +6752,8 @@ __see also__
 
-
-For each segment (of the effective timeline), there is a Processor holding the exit node(s) of a processing network, which is a "Directed Acyclic Graph" of small, preconfigured, stateless [[processing nodes|ProcNode]]. This network is operated according to the ''pull principle'', meaning that the rendering is just initiated by "pulling" output from the exit node, causing a cascade of recursive downcalls or prerequisite calculations to be scheduled as individual [[jobs|RenderJob]]. Each node knows its predecessor(s), thus the necessary input can be pulled from there. Consequently, there is no centralized "engine object" which may invoke nodes iteratively or table driven &mdash; rather, the rendering can be seen as a passive service provided for the backend, which may pull from the exit nodes at any time, in any order (?), and possibly multithreaded.
-All State necessary for a given calculation process is encapsulated and accessible by a StateProxy object, which can be seen as the representation of "the process". At the same time, this proxy provides the buffers holding data to be processed and acts as a gateway to the backend to handle the communication with the Cache. In addition to this //top-level State,// each calculation step includes a small [[state adapter object|StateAdapter]] (stack allocated), which is pre-configured by the builder and serves the purpose to isolate the processing function from the detals of buffer management.
+
+For each segment (of the effective timeline), there is a Processor holding the exit node(s) of a processing network, which is a "Directed Acyclic Graph" of small, preconfigured, stateless [[processing nodes|ProcNode]]. This network is operated according to the ''pull principle'', meaning that the rendering is just initiated by "pulling" output from the exit node, causing a cascade of recursive downcalls or prerequisite calculations to be scheduled as individual [[jobs|RenderJob]]. Each node knows its predecessor(s), thus the necessary input can be pulled from there. Consequently, there is no centralized "engine object" which may invoke nodes iteratively or table driven &mdash; rather, the rendering can be seen as a passive service provided for the Vault, which may pull from the exit nodes at any time, in any order (?), and possibly multithreaded.
+All State necessary for a given calculation process is encapsulated and accessible by a StateProxy object, which can be seen as the representation of "the process". At the same time, this proxy provides the buffers holding data to be processed and acts as a gateway to the Vault to handle the communication with the Cache. In addition to this //top-level State,// each calculation step includes a small [[state adapter object|StateAdapter]] (stack allocated), which is pre-configured by the builder and serves to isolate the processing function from the details of buffer management.
 
 
 __see also__
@@ -6816,9 +6763,9 @@ __see also__
 
-
-The rendering of input sources to the desired output ports happens within the &raquo;''Render Engine''&laquo;, which can be seen as a collaboration of Proc-Layer, Backend together with external/library code for the actual data manipulation. In preparation of the RenderProcess, the [[Builder]] as wired up a network of [[processing nodes|ProcNode]] called the ''low-level model'' (in contrast to the high-level model of objects placed within the session). Generally, this network is a "Directed Acyclic Graph" starting at the //exit nodes// (output ports) and pointing down to the //source readers.// In Lumiera, rendering is organized according to the ''pull principle'': when a specific frame of rendered data is requested from an exit node, a recursive calldown happens, as each node asks his predecessor(s) for the necessary input frame(s). This may include pulling frames from various input sources and for several time points, thus pull rendering is more powerful (but also more difficult to understand) than push rendering, where the process would start out with a given source frame.
+
+The rendering of input sources to the desired output ports happens within the &raquo;''Render Engine''&laquo;, which can be seen as a collaboration of Steam-Layer and Vault, together with external/library code for the actual data manipulation. In preparation of the RenderProcess, the [[Builder]] has wired up a network of [[processing nodes|ProcNode]] called the ''low-level model'' (in contrast to the high-level model of objects placed within the session). Generally, this network is a "Directed Acyclic Graph" starting at the //exit nodes// (output ports) and pointing down to the //source readers.// In Lumiera, rendering is organized according to the ''pull principle'': when a specific frame of rendered data is requested from an exit node, a recursive calldown happens, as each node asks its predecessor(s) for the necessary input frame(s). This may include pulling frames from various input sources and for several time points; thus pull rendering is more powerful (but also more difficult to understand) than push rendering, where the process would start out with a given source frame.
 
-Rendering can be seen as a passive service available to the Backend, which remains in charge what to render and when. Render processes may be running in parallel without any limitations. All of the storage and data management falls into the realm of the Backend. The render nodes themselves are ''completely stateless'' &mdash; if some state is necessary for carrying out the calculations, the backend will provide a //state frame// in addition to the data frames.
+Rendering can be seen as a passive service available to the Vault, which remains in charge of what to render and when. Render processes may be running in parallel without any limitations. All of the storage and data management falls into the realm of the Vault. The render nodes themselves are ''completely stateless'' &mdash; if some state is necessary for carrying out the calculations, the Vault will provide a //state frame// in addition to the data frames.
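The recursive calldown of the pull principle can be sketched in a few lines of illustrative C++ (invented names, not actual Lumiera code): each stateless node holds only its wiring, and a single `pull` on the exit node cascades down to the source readers.

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for a processing node: a source node yields the frame number
// itself, every other node combines its predecessors' output and adds its
// own contribution. Real nodes would operate on media buffers instead.
struct Node {
    std::vector<const Node*> predecessors;
    int contribution;                     // stands in for the actual calculation

    int pull(int frameNr) const {         // recursive down-call
        if (predecessors.empty())
            return frameNr;               // "source reader": produce raw input
        int sum = 0;
        for (const Node* p : predecessors)
            sum += p->pull(frameNr);      // pull required input frames
        return sum + contribution;        // then apply this node's processing
    }
};
```

Because the nodes carry no mutable state, any number of such pulls may run concurrently over the same graph; per-process state would live in the StateProxy / state frames described above.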
A special kind of [[render job|RenderJob]], used to retrieve input data relying on external IO.
@@ -6836,7 +6783,7 @@ For now, the above remains in the status of a general concept and typical soluti
 Later on we expect a distinct __query subsystem__ to emerge, presumably embedding a YAP Prolog interpreter.
-
-A facility allowing the Proc-Layer to work with abstracted [[media stream types|StreamType]], linking (abstract or opaque) [[type tags|StreamTypeDescriptor]] to an [[library|MediaImplLib]], which provides functionality for acutally dealing with data of this media stream type. Thus, the stream type manager is a kind of registry of all the external libraries which can be bridged and accessed by Lumiera (for working with media data, that is). The most basic set of libraries is instelled here automatically at application start, most notably the [[GAVL]] library for working with uncompressed video and audio data. //Later on, when plugins will introduce further external libraries, these need to be registered here too.//
+
+A facility allowing the Steam-Layer to work with abstracted [[media stream types|StreamType]], linking (abstract or opaque) [[type tags|StreamTypeDescriptor]] to a [[library|MediaImplLib]], which provides functionality for actually dealing with data of this media stream type. Thus, the stream type manager is a kind of registry of all the external libraries which can be bridged and accessed by Lumiera (for working with media data, that is). The most basic set of libraries is installed here automatically at application start, most notably the [[GAVL]] library for working with uncompressed video and audio data. //Later on, when plugins will introduce further external libraries, these need to be registered here too.//
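In its simplest conceivable form, such a registry is a map from opaque type tags to library descriptors. The following is a minimal sketch under assumed names (`StreamTypeManager`, `MediaImplLib`, the `"video/raw"` tag), not the real Lumiera interface:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Stand-in for a bridged external library (e.g. GAVL for uncompressed media).
struct MediaImplLib {
    std::string name;
};

// Registry linking opaque stream-type tags to the library handling that kind
// of media data. Plugins would call registerLib() for additional libraries.
class StreamTypeManager {
    std::map<std::string, MediaImplLib> registry_;
public:
    void registerLib(std::string tag, MediaImplLib lib) {
        registry_[std::move(tag)] = std::move(lib);
    }
    const MediaImplLib& libFor(const std::string& tag) const {
        return registry_.at(tag);   // throws std::out_of_range if unregistered
    }
};
```

The real facility would additionally have to handle capability queries and type conversions; this sketch only shows the registration/lookup core.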
A scale grid controls the way of measuring and aligning a quantity the application has to deal with. The most prominent example is the way to handle time in fixed atomic chunks (''frames'') addressed through a fixed format (''timecode''): while internally the application uses time values of sufficiently fine-grained resolution, the actually visible timing coordinates of objects within the session are ''quantised'' to some predefined and fixed time grid.
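The core of such quantisation is a floor-alignment of fine-grained internal time onto the grid. A minimal sketch, assuming microsecond-resolution internal time and a 25fps grid as example values (not Lumiera's actual time representation):

```cpp
#include <cassert>

// Example grid: 40000 µs per frame = 25 fps. A real ScaleGrid would carry
// the frame duration (and a grid origin) as data, not as a constant.
const long long FRAME_US = 40000;

// Map a fine-grained internal time (µs) onto its frame number on the grid.
// Floor semantics: a time anywhere inside a frame belongs to that frame,
// including negative times before the grid origin.
long long quantise(long long timeMicros) {
    long long frame = timeMicros / FRAME_US;     // C++ division truncates...
    if (timeMicros < 0 && timeMicros % FRAME_US != 0)
        --frame;                                 // ...so correct to floor
    return frame;
}
```

The explicit floor correction matters: plain integer division would round negative times towards zero, putting `-1 µs` into frame 0 instead of frame -1 and thus misaligning everything left of the grid origin.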
@@ -6927,7 +6874,7 @@ When [[building the fixture|BuildFixture]], ~MObjects -- as handled by their Pla
 ;(2) commit stage
 : -- after the build process(es) are completed, the new fixture gets ''committed'', thus becoming the officially valid state to be rendered. As render processes might be going on in parallel, some kind of locking or barrier is required. It seems advisable to make the change into a single atomic hot-swap. Meaning we'd get a single access point to be protected. But there is another twist: We need to find out which render processes to cancel and restart, to pick up the changes introduced by this build process -- which might include adding and deleting of timelines as a whole, and any conceivable change to the segmentation grid. Because of the highly dynamic nature of the placements, on the other hand it isn't viable to expect the high-level model to provide this information. Thus we need to find out about a ''change coverage'' at this point. We might expand on that idea to //prune any new segments which aren't changed.// This way, only a write barrier would be necessary on switching the actually changed segments, and any render processes touching these would be //tainted.// Old allocations could be released after all tainted processes are known to be terminated.
 ;(3) rendering use
-:Each play/render process employs a ''frame dispatch step'' to get the right exit node for pulling a given frame (&rarr; [[Dispatcher|FrameDispatcher]]). From there on, the process proceeds into the [[processing nodes|ProcNode]], interleaved with backend/scheduler actions due to splitting into individually scheduled jobs. The storage of these processing nodes and accompanying wiring descriptors is hooked up behind the individual segments, by sharing a common {{{AllocationCluster}}}. Yet the calculation of individual frames also depends on ''parameters'' and especially ''automation'' linked with objects in the high-level model. It is likely that there might be some sharing or some kind of additional communication interface, as the intention was to allow ''live changes'' to automated values. <br/>{{red{WIP 12/2010}}} details need to be worked out. &rarr; [[parameter wiring concept|Wiring]]
+:Each play/render process employs a ''frame dispatch step'' to get the right exit node for pulling a given frame (&rarr; [[Dispatcher|FrameDispatcher]]). From there on, the process proceeds into the [[processing nodes|ProcNode]], interleaved with Vault/scheduler actions due to splitting into individually scheduled jobs. The storage of these processing nodes and accompanying wiring descriptors is hooked up behind the individual segments, by sharing a common {{{AllocationCluster}}}. Yet the calculation of individual frames also depends on ''parameters'' and especially ''automation'' linked with objects in the high-level model. It is likely that there might be some sharing or some kind of additional communication interface, as the intention was to allow ''live changes'' to automated values. <br/>{{red{WIP 12/2010}}} details need to be worked out. &rarr; [[parameter wiring concept|Wiring]]
 !!!observations
 * Storage and initialisation for explicit placements is an issue. We should strive to make that inline as much as possible.
 * the overall segmentation emerges from a sorting of time points, which are start points of explicit placements
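The ''transactional switch'' of the commit stage (2) can be sketched as follows. This is an illustrative sketch only (invented names, shared-pointer ownership as an assumption): the new Fixture is built off-line, then published through a single protected swap, while render processes started earlier keep their reference and thus continue to see a consistent old state until they terminate.

```cpp
#include <cassert>
#include <memory>
#include <mutex>

struct Fixture {
    int generation;    // stands in for the full segmentation / node network
};

std::shared_ptr<Fixture> current = std::make_shared<Fixture>(Fixture{1});
std::mutex switchLock;   // the single access point to be protected

// Called when a render process starts: pin the currently valid fixture.
std::shared_ptr<Fixture> acquire() {
    std::lock_guard<std::mutex> guard(switchLock);
    return current;
}

// Called after a completed builder run: the atomic hot-swap.
void commit(std::shared_ptr<Fixture> next) {
    std::lock_guard<std::mutex> guard(switchLock);
    current = std::move(next);   // old fixture stays alive while still pinned
}
```

The change-coverage / tainting logic discussed above would sit on top of this: only processes pinned to swapped-out segments need to be cancelled, and the old allocations are released automatically once the last pinning shared_ptr is dropped.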
@@ -6965,7 +6912,7 @@ A sequence is always tied to a root-placed track, it can't exist without such. W
 
A helper to implement a specific memory management scheme for playback and rendering control data structures.
-In this context, model and management data is structured into [[Segments|Segmentation]] of similar configuration within the project timeline. Beyond logical reasoning, these segments also serve as ''extents'' for memory allocation. Which leads to the necessity of [[segment related memory management|FixtureStorage]]. The handling of actual media data buffers is outside the realm of this topic; these are managed by the frame cache within the backend.
+In this context, model and management data is structured into [[Segments|Segmentation]] of similar configuration within the project timeline. Beyond logical reasoning, these segments also serve as ''extents'' for memory allocation, which leads to the necessity of [[segment related memory management|FixtureStorage]]. The handling of actual media data buffers is outside the realm of this topic; these are managed by the frame cache within the Vault.
 
 When addressing this task, we're facing several closely related concerns.
 ;throughput
@@ -7008,13 +6955,13 @@ The Session object is a singleton &mdash; actually it is a »~PImpl«-Facade
 
 
 !Session lifecycle
-The session lifecycle need to be distinguished from the state of the [[session subsystem|SessionSubsystem]]. The latter is one of the major components of Lumiera, and when it is brought up, the {{{SessionCommandFacade}}} is opened and the ProcDispatcher started. On the other hand, the session as such is a data structure and pulled up on demand, by the {{{SessionManager}}}. Whenever the session is fully populated and configured, the ProcDispatcher is instructed to //actually allow dispatching of commands towards the session.// This command dispatching mechanism is the actual access point to the session for clients outside Proc-Layer; when dispatching is halted, commands can be enqueued non the less, which allows for a reactive UI.
+The session lifecycle needs to be distinguished from the state of the [[session subsystem|SessionSubsystem]]. The latter is one of the major components of Lumiera, and when it is brought up, the {{{SessionCommandFacade}}} is opened and the SteamDispatcher started. On the other hand, the session as such is a data structure and pulled up on demand, by the {{{SessionManager}}}. Whenever the session is fully populated and configured, the SteamDispatcher is instructed to //actually allow dispatching of commands towards the session.// This command dispatching mechanism is the actual access point to the session for clients outside the Steam-Layer; when dispatching is halted, commands can nonetheless be enqueued, which allows for a reactive UI.
 
-
-LayerSeparationInterface, provided by the Proc-Layer.
+
LayerSeparationInterface, provided by the Steam-Layer.
 The {{{SessionCommand}}} façade and the corresponding {{{proc::control::SessionCommandService}}} can be considered //the public interface to the session://
-They allow to send [[commands|CommandHandling]] to work on the session data structure. All these commands, as well as the [[Builder]], are performed in a dedicated thread, the »session loop thread«, which is operated by the ProcDispatcher. As a direct consequence, all mutations of the session data, as well as all logical consequences determined by the builder, are performed single-threaded, without the need to care for synchronisation issues. Another consequence of this design is the fact that running the builder disables session command processing, causing further commands to be queued up in the ProcDispatcher. Any structural changes resulting from builder runs will finally be pushed back up into the UI, asynchronously.
+They allow sending [[commands|CommandHandling]] to work on the session data structure. All these commands, as well as the [[Builder]], are performed in a dedicated thread, the »session loop thread«, which is operated by the SteamDispatcher. As a direct consequence, all mutations of the session data, as well as all logical consequences determined by the builder, are performed single-threaded, without the need to care for synchronisation issues. Another consequence of this design is the fact that running the builder disables session command processing, causing further commands to be queued up in the SteamDispatcher. Any structural changes resulting from builder runs will finally be pushed back up into the UI, asynchronously.
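The enqueue-then-dispatch pattern described here can be sketched as follows. This is a simplified, hypothetical illustration (the names {{{SessionLoop}}} and {{{drain()}}} are invented for this sketch, not Lumiera's actual API); the real session loop runs in a dedicated thread, whereas here the loop body is invoked explicitly to keep the example deterministic:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <mutex>

// Hypothetical sketch: session mutations happen only from one "session loop";
// producers (UI, scripts) merely enqueue command messages.
using Command = std::function<void()>;

class SessionLoop {
    std::mutex mtx_;
    std::deque<Command> queue_;
    bool dispatchActive_ = false;        // toggled by the session lifecycle

public:
    void enqueue(Command c)              // callable from any thread
    {
        std::lock_guard<std::mutex> lock(mtx_);
        queue_.push_back(std::move(c));  // commands pile up even while halted
    }

    void activate(bool yes) { dispatchActive_ = yes; }

    // In the real design this body runs in the »session loop thread«;
    // returns the number of commands actually dispatched.
    int drain()
    {
        int handled = 0;
        for (;;) {
            Command cmd;
            {
                std::lock_guard<std::mutex> lock(mtx_);
                if (!dispatchActive_ || queue_.empty()) return handled;
                cmd = std::move(queue_.front());
                queue_.pop_front();
            }
            cmd();                       // single-threaded session mutation
            ++handled;
        }
    }
};
```

Note how commands enqueued while dispatching is halted are retained and processed only after activation, which is what permits a reactive UI during builder runs.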
While the core of the persistent session state corresponds just to the HighLevelModel, there is additionally attached state, annotations and specific bindings, which allow connecting the session model to the local application configuration on each system. A typical example would be the actual output channels, connections and drivers to use on a specific system. In a Studio setup, this setup and wiring might be quite complex, it may be specific to just a single project, and the user might want to work on the same project on different systems. This explains why we can't just embody this configuration information right into the actual model.
@@ -7061,7 +7008,7 @@ The session and the models rely on dependent objects beeing kept updated and con
"Session Interface" has several meanings, depending on the context.
 ;application global
 :the session is a data structure, which can be saved and loaded, and manipulated by [[sending commands|CommandHandling]]
-;within ~Proc-Layer
+;within ~Steam-Layer
 :here »the session« can be seen as a compound of several interfaces and facilities,
 :together forming the primary access point to the user visible contents and state of the editing project.
 :* the API of the session class
@@ -7076,7 +7023,7 @@ The session and the models rely on dependent objects beeing kept updated and con
 :** Automation
 :* the [[command handling framework|CommandHandling]], including the [[UNDO|UndoManager]] facility
 
-__Note__: the SessionInterface as such is //not a [[external public interface|LayerSeparationInterfaces]].// Clients from outside Proc-Layer can talk to the session by issuing commands through the {{{SessionCommandFacade}}}. Processing of commands is coordinated by the ProcDispatcher, which also is responsible for starting the [[Builder]].
+__Note__: the SessionInterface as such is //not an [[external public interface|LayerSeparationInterfaces]].// Clients from outside Steam-Layer can talk to the session by issuing commands through the {{{SessionCommandFacade}}}. Processing of commands is coordinated by the SteamDispatcher, which also is responsible for starting the [[Builder]].
 
 
 !generic and explicit API
@@ -7122,7 +7069,7 @@ While this protects against accessing dangling references, it can't prevent clie
 * regarding CommandHandling, the //design decision was to require a dedicated (and hand written) undo functor.//
 
 !!!!protection against accidental mutation
-{{red{WIP}}}As of 2/10, I am considering to add a protection against invoking an raw mutation operation accidentally, and especially bypassing the command frontend and the ProcDispatcher. This would not only be annoying (no UNDO), but potentially dangerous, because all of the session internals are not threadsafe by design.
+{{red{WIP}}}As of 2/10, I am considering adding protection against accidentally invoking a raw mutation operation, especially one bypassing the command frontend and the SteamDispatcher. This would not only be annoying (no UNDO), but potentially dangerous, because all of the session internals are not threadsafe by design.
The considered solution would be to treat this situation as if an authorisation is required; this authorisation for mutation could be checked by a &raquo;wormhole&laquo;-like context access (&rarr; aspect oriented programming). Of course, in our case we're not dealing with real access restrictions, just a safeguard: while command execution creates such an authorisation token automatically, a client actually wanting to invoke a mutation operation bypassing the command frontend would need to set up such a token explicitly and manually.
 
 !!adding and destroying
@@ -7138,26 +7085,26 @@ When adding an object, a [[scope|PlacementScope]] needs to be specified. Thus it
* how is all of this related to the LayerSeparationInterfaces, here SessionFacade and EditFacade?
 
 <<<
-__preliminary notes__: {{red{3/2010}}} Discovery functions accessible from the session API are always written such as to return ~MObjectRefs. These expose generic functions for modifying the structure: {{{attach(MObjectRef)}}} and {{{purge()}}}. The session API exposes variations of these functions. Actually, all these functions do dispatch the respective commands automatically. {{red{Note 1/2015 not implemented, not sure if thats a good idea}}} To the contrary, the raw functions for adding and removing placements are located on the PlacementIndex; they are accessible as SessionServices &mdash; which are intended for Proc-Layer's internal use solely. This separation isn't meant to be airtight, just an reminder for proper use.
+__preliminary notes__: {{red{3/2010}}} Discovery functions accessible from the session API are always written such as to return ~MObjectRefs. These expose generic functions for modifying the structure: {{{attach(MObjectRef)}}} and {{{purge()}}}. The session API exposes variations of these functions. Actually, all these functions do dispatch the respective commands automatically. {{red{Note 1/2015 not implemented, not sure if that's a good idea}}} In contrast, the raw functions for adding and removing placements are located on the PlacementIndex; they are accessible as SessionServices &mdash; which are intended solely for Steam-Layer's internal use. This separation isn't meant to be airtight, just a reminder for proper use.
 
Currently, I'm planning to modify MObjectRef to return only a const ref to the underlying facilities by default. Then, there would be a subclass which is //mutation enabled.// But this subclass will check for the presence of a mutation-permission token &mdash; which is exposed via thread local storage, but //only within a command dispatch.// Again, no attempt is made to make this barrier airtight. Indeed, for tests, the mutation-permission token can just be created in the local scope. After all, this is not conceived as an authorisation scheme, rather as an automatic sanity check. It's the liability of the client code to ensure any mutation is dispatched.
 <<<
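The token scheme under consideration could look roughly like this. All names are hypothetical (this is a sketch of the idea, not the actual implementation), and as stated above the check is a sanity guard, not real access control:

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical sketch of the »permission to mutate« safeguard:
// a thread-local flag is the token, set via RAII by the command dispatch.
thread_local bool mutationToken = false;

struct MutationPermission            // created automatically during command execution,
{                                    // or explicitly in the local scope of a test
    MutationPermission()  { mutationToken = true; }
    ~MutationPermission() { mutationToken = false; }
};

struct SomeSessionObject
{
    int value = 0;

    void mutate(int v)               // raw mutation entry point checks the token
    {
        if (!mutationToken)
            throw std::logic_error("mutation attempted outside command dispatch");
        value = v;
    }
};
```

Because the token lives in thread-local storage, it is visible "through" intermediate call layers (the &raquo;wormhole&laquo; effect), yet only within the thread actually performing the command dispatch.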
 
-The current [[Session]] is the root of any state found within Proc-Layer. Thus, events defining the session's lifecycle influence and synchronise the cooperative behaviour of the entities within the model, the ProcDispatcher, [[Fixture]] and any facility below.
+The current [[Session]] is the root of any state found within Steam-Layer. Thus, events defining the session's lifecycle influence and synchronise the cooperative behaviour of the entities within the model, the SteamDispatcher, [[Fixture]] and any facility below.
 * when ''starting'', on first access an empty session is created, which puts any related facility into a defined initial state.
 * when ''closing'' the session, any dependent facilities are disabled, disconnected, halted or closed
 * ''loading'' an existing session &mdash; after closing the previous session &mdash; sets up an empty (default) session and populates it with de-serialised content.
-* when encountering a ''mutation point'', [[command processing|ProcDispatcher]] is temporarily halted to trigger off an BuildProcess.
+* when encountering a ''mutation point'', [[command processing|SteamDispatcher]] is temporarily halted to trigger off a BuildProcess.
 
 !Role of the session manager
-The SessionManager is responsible for conducting the session lifecycle. Accessible through the static interface {{{Session::current}}}, it exposes the actual session as a ~PImpl. Both session manager and session are indeed interfaces, backed by implementation classes belonging to ~Proc-Layer's internals. Loading, saving, resetting and closing are the primary public operations of the session manager, each causing the respective lifecycle event.
+The SessionManager is responsible for conducting the session lifecycle. Accessible through the static interface {{{Session::current}}}, it exposes the actual session as a ~PImpl. Both session manager and session are indeed interfaces, backed by implementation classes belonging to ~Steam-Layer's internals. Loading, saving, resetting and closing are the primary public operations of the session manager, each causing the respective lifecycle event.
 
Beyond that, client code usually doesn't interact much with the lifecycle, which mostly is a pattern of events to happen in a well-defined sequence. So the //implementation// of the session management operations has to comply with this lifecycle, and does so by relying on a self-contained implementation service, the LifecycleAdvisor. But (contrary to an application framework) the lifecycle of the Lumiera session is rather fixed, the only possibility for configuration or extension being the [[lifecycle hooks|LifecycleEvent]], where other parts of the system (and even plug-ins) may install some callback methods.
 
 !Synchronising access to session's implementation facilities
-Some other parts and subsystems within the ~Proc-Layer need specialised access to implementation facilities within the session. Information about some conditions and configurations might be retrieved through [[querrying the session|Query]], and especially default configurations for many objects are [[bound to the session|DefaultsImplementation]]. The [[discovery of session contents|SessionStructureQuery]] relies on an [[index facility|PlacementIndex]] embedded within the session implementation. Moreover, some "properties" of the [[media objects|MObject]] are actually due to the respective object being [[placed|Placement]] in some way into the session; consequently, there might be an dependency on the actual [[location as visible to the placement|PlacementScope]], which in turn is constituted by [[querying the index|QueryFocus]].
+Some other parts and subsystems within the ~Steam-Layer need specialised access to implementation facilities within the session. Information about some conditions and configurations might be retrieved through [[querying the session|Query]], and especially default configurations for many objects are [[bound to the session|DefaultsImplementation]]. The [[discovery of session contents|SessionStructureQuery]] relies on an [[index facility|PlacementIndex]] embedded within the session implementation. Moreover, some "properties" of the [[media objects|MObject]] are actually due to the respective object being [[placed|Placement]] in some way into the session; consequently, there might be a dependency on the actual [[location as visible to the placement|PlacementScope]], which in turn is constituted by [[querying the index|QueryFocus]].
 
Each of these facilities relies on a separate access point to session services, corresponding to distinct service interfaces. But &mdash; on the implementation side &mdash; all these services are provided by a (compound) SessionServices implementation object. This approach allows switching the actual implementation of all these services simply by swapping the ~PImpl maintained by the session manager. A new implementation level service can thus be added to the ~SessionImpl just by hooking it into the ~SessionServices compound object. But note, this mechanism as such is ''not thread safe'', unless the //implementation// of the invoked functions is synchronised in some way to prevent switching to a new session implementation while another thread is still executing session implementation code.
 
@@ -7191,16 +7138,16 @@ As detailed above, {{{Session::current}}} exposes the management / lifecycle API
 
The Session contains all information, state and objects to be edited by the User (&rarr;[[def|Session]]).
-As such, the SessionInterface is the main entrance point to Proc-Layer functionality, both for the primary EditingOperations and for playback/rendering processes. Proc-Layer state is rooted within the session and guided by the [[session's lifecycle events|SessionLifecycle]].
-Implementation facilities within the Proc-Layer may access a somewhat richer [[session service API|SessionServices]].
+As such, the SessionInterface is the main entrance point to Steam-Layer functionality, both for the primary EditingOperations and for playback/rendering processes. Steam-Layer state is rooted within the session and guided by the [[session's lifecycle events|SessionLifecycle]].
+Implementation facilities within the Steam-Layer may access a somewhat richer [[session service API|SessionServices]].
 
 Currently (as of 3/10), Ichthyo is working on getting a preliminary implementation of the [[Session in Memory|SessionDataMem]] settled.
 
 !Session, Model and Engine
-The session is a [[Subsystem]] and acts as a frontend to most of the Proc-Layer. But it doesn't contain much operational logic; its primary contents are the [[model|Model]], which is closely [[interconnected to the assets|AssetModelConnection]].
+The session is a [[Subsystem]] and acts as a frontend to most of the Steam-Layer. But it doesn't contain much operational logic; its primary contents are the [[model|Model]], which is closely [[interconnected to the assets|AssetModelConnection]].
 
 !Design and handling of Objects within the Session
-Objects are attached and manipulated by [[placements|Placement]]; thus the organisation of these placements is part of the session data layout. Effectively, such a placement within the session behaves like an //instances// of a given object, and at the same time it defines the "non-substantial" properties of the object, e.g. its positions and relations. [[References|MObjectRef]] to these placement entries are handed out as parameters, both down to the [[Builder]] and from there to the render processes within the engine, but also to external parts within the GUI and in plugins. The actual implementation of these object references is built on top of the PlacementRef tags, thus relying on the PlacementIndex the session maintains to keep track of all placements and their relations. While &mdash; using these references &mdash; an external client can access the objects and structures within the session, any actual ''mutations'' should be done based on the CommandHandling: a single operation of a sequence of operations is defined as [[Command]], to be [[dispatched|ProcDispatcher]] as [[mutation operation|SessionMutation]]. Following this policy ensures integration with the&nbsp;SessionStorage and provides (unlimited) [[UNDO|UndoManager]].
+Objects are attached and manipulated by [[placements|Placement]]; thus the organisation of these placements is part of the session data layout. Effectively, such a placement within the session behaves like an //instance// of a given object, and at the same time it defines the "non-substantial" properties of the object, e.g. its positions and relations. [[References|MObjectRef]] to these placement entries are handed out as parameters, both down to the [[Builder]] and from there to the render processes within the engine, but also to external parts within the GUI and in plugins. The actual implementation of these object references is built on top of the PlacementRef tags, thus relying on the PlacementIndex the session maintains to keep track of all placements and their relations. While &mdash; using these references &mdash; an external client can access the objects and structures within the session, any actual ''mutations'' should be done based on the CommandHandling: a single operation or a sequence of operations is defined as [[Command]], to be [[dispatched|SteamDispatcher]] as [[mutation operation|SessionMutation]]. Following this policy ensures integration with the&nbsp;SessionStorage and provides (unlimited) [[UNDO|UndoManager]].
 
On the implementation level, there are some interdependencies to consider between the [[data layout|SessionDataMem]], keeping ModelDependencies updated and integrating with the BuildProcess. While the internals of the session are deliberately kept single-threaded, we can't make many assumptions regarding the ongoing render processes.
 
@@ -7211,7 +7158,7 @@ On the implementation level, there are some interdependencies to consider betwee
//Any modification of the session will pass through the [[command system|CommandHandling]].//
-Thus any possible mutation comes in two flavours: a raw operation invoked directly on an object instance attached to the model, and a command taking an MObjectRef as parameter. The latter approach &mdash; invoking any mutation through a command &mdash; will pass the mutations trough the ProcDispatcher to ensure the're logged for [[UNDO|UndoManager]] and executed sequentially, which is important, because the session's internals are //not threadsafe by design.// Thus we're kind of enforcing the use of Commands: mutating operations include a check for a &raquo;permission to mutate&laquo;, which is automatically available within a command execution {{red{TODO as of 2/10}}}. Moreover, the session API and the corresponding LayerSeparationInterfaces expose MObjectRef instances, not raw (language) refs.
+Thus any possible mutation comes in two flavours: a raw operation invoked directly on an object instance attached to the model, and a command taking an MObjectRef as parameter. The latter approach &mdash; invoking any mutation through a command &mdash; will pass the mutations through the SteamDispatcher to ensure they're logged for [[UNDO|UndoManager]] and executed sequentially, which is important, because the session's internals are //not threadsafe by design.// Thus we're kind of enforcing the use of Commands: mutating operations include a check for a &raquo;permission to mutate&laquo;, which is automatically available within a command execution {{red{TODO as of 2/10}}}. Moreover, the session API and the corresponding LayerSeparationInterfaces expose MObjectRef instances, not raw (language) refs.
 
 !!Questions to solve
 * how to get from the raw mutation to the command?
@@ -7232,7 +7179,7 @@ Interestingly, there seems to be an alternative answer to this question. We coul
 * [[Session]] is largely synonymous to ''Project''
 * there seems to be a new entity called [[Timeline]] which holds the global Pipes
 <<<
-The [[Session]] (sometimes also called //Project// ) contains all information and objects to be edited by the User. Any state within the Proc-Layer is directly or indirectly rooted in the session. It can be saved and loaded. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one or multiple collections within the Session, which we call [[sequence(s)|Sequence]]. Moreover, the sesion contains references to all the Media files used, and it contains various default or user defined configuration, all being represented as [[Asset]]. At any given time, there is //only one current session// opened within the application. The [[lifecycle events|SessionLifecycle]] of the session define the lifecycle of ~Proc-Layer as a whole.
+The [[Session]] (sometimes also called //Project// ) contains all information and objects to be edited by the User. Any state within the Steam-Layer is directly or indirectly rooted in the session. It can be saved and loaded. The individual Objects within the Session, i.e. Clips, Media, Effects, are contained in one or multiple collections within the Session, which we call [[sequence(s)|Sequence]]. Moreover, the session contains references to all the Media files used, and it contains various default or user defined configuration, all being represented as [[Asset]]. At any given time, there is //only one current session// opened within the application. The [[lifecycle events|SessionLifecycle]] of the session define the lifecycle of ~Steam-Layer as a whole.
 
 The Session is close to what is visible in the GUI. From a user's perspective, you'll find a [[Timeline]]-like structure, containing an [[Sequence]], where various Media Objects are arranged and placed. The available building blocks and the rules how they can be combined together form Lumiera's [[high-level data model|HighLevelModel]]. Basically, besides the [[media objects|MObjects]] there are data connections and all processing is organized around processing chains or [[pipes|Pipe]], which can be either global (in the Session) or local (in real or virtual clips).
 
@@ -7240,7 +7187,7 @@ The Session is close to what is visible in the GUI. From a user's perspective, y
 For larger editing projects the simple structure of a session containing "the" timeline is not sufficient. Rather
 * we may have several [[sequences|Sequence]], e.g. one for each scene. These sequences can be even layered or nested (compositional work).
* within one project, there may be multiple, //independent Timelines// &mdash; each of which may have an associated Viewer or Monitor
-Usually, when working with this stucture, you'll drill down starting from a timeline, trough a (top-level) sequence, down into a fork ("track"), a clip, maybe even a embedded Sequence (VirtualClip), and from there even more down into a single attached effect. This constitutes a set of [[nested scopes|PlacementScope]]. Operations are to be [[dispatched|ProcDispatcher]] through a [[command system|CommandHandling]], including the target object [[by reference|MObjectRef]]. [[Timelines|Timeline]] on the other hand are always top-level objects and can't be combined further. You can render a single given timeline to output.
+Usually, when working with this structure, you'll drill down starting from a timeline, through a (top-level) sequence, down into a fork ("track"), a clip, maybe even an embedded Sequence (VirtualClip), and from there even further down into a single attached effect. This constitutes a set of [[nested scopes|PlacementScope]]. Operations are to be [[dispatched|SteamDispatcher]] through a [[command system|CommandHandling]], including the target object [[by reference|MObjectRef]]. [[Timelines|Timeline]] on the other hand are always top-level objects and can't be combined further. You can render a single given timeline to output.
 &rarr; see [[Relation of Project, Timelines and Sequences|TimelineSequences]]
 
 !!!the definitive state
@@ -7263,7 +7210,7 @@ It will contain a global video and audio out pipe, just one timeline holding a s
 
-Within Lumiera's Proc-Layer, there are some implementation facilities and subsystems needing more specialised access to implementation services provided by the session. Thus, besides the public SessionInterface and the [[lifecycle and state management API|SessionManager]], there are some additional service interfaces exposed by the session through a special access mechanism. This mechanism needs to be special in order to assure clean transactional behaviour when the session is opened, closed, cleared or loaded. Of course, there is the additional requirement to avoid direct dependencies of the mentioned Proc internals on session implementation details.
+Within Lumiera's Steam-Layer, there are some implementation facilities and subsystems needing more specialised access to implementation services provided by the session. Thus, besides the public SessionInterface and the [[lifecycle and state management API|SessionManager]], there are some additional service interfaces exposed by the session through a special access mechanism. This mechanism needs to be special in order to assure clean transactional behaviour when the session is opened, closed, cleared or loaded. Of course, there is the additional requirement to avoid direct dependencies of the mentioned Steam internals on session implementation details.
 
 !Accessing session services
For each of these services, there is an access interface, usually through a class with only static methods. Basically this means access //by name.//
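Such access »by name« with a swappable compound implementation might be sketched as follows. All names here are hypothetical and heavily simplified; as noted elsewhere, the real mechanism additionally has to deal with synchronisation and transactional switching when the session is opened, closed or reloaded:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical compound implementation object: every session-level service
// is implemented here, so swapping one PImpl switches all services at once.
struct SessionServices
{
    std::string resolveDefault(std::string const& key) { return "default-" + key; }
};

class SessionManager                 // owns the PImpl; lifecycle operations swap it
{
    static std::unique_ptr<SessionServices>& pImpl()
    {
        static std::unique_ptr<SessionServices> services;
        return services;
    }
public:
    static void reset()              // e.g. on "new session": fresh implementation
    {
        pImpl() = std::make_unique<SessionServices>();
    }
    static SessionServices& current() { return *pImpl(); }
};

struct DefaultsAccess                // static access point »by name« for one service
{
    static std::string resolve(std::string const& key)
    {
        return SessionManager::current().resolveDefault(key);
    }
};
```

Client code depends only on the static access point, never on the implementation class, which is what keeps the mentioned internals decoupled from session implementation details.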
@@ -7305,22 +7252,22 @@ And last but not least: the difficult part of this whole concept is encapsulated
 {{red{WIP ... draft}}}
-//A subsystem within Proc-Layer, responsible for lifecycle and access to the editing [[Session]].//
+//A subsystem within Steam-Layer, responsible for lifecycle and access to the editing [[Session]].//
 [img[Structure of the Session Subsystem|uml/Session-subsystem.png]]
 
 !Structure
-The ProcDispatcher is at the heart of the //Session Subsystem.// Because the official interface for working on the session, the [[SessionCommand façade|SessionCommandFacade]], is expressed in terms of sending command messages to invoke predefined [[commands|CommandHandling]] to operate on the SessionInterface, the actual implementation of such a {{{SessionCommandService}}} needs a component actually to enqueue and dispatch those commands -- which is the {{{DispatcherLoop}}} within the ProcDispatcher. As usual, the ''lifecycle'' is controlled by a subsystem descriptor, which starts and stops the whole subsystem; and this starting and stopping in turn translates into starting/stopping of the dispatcher loop. On the other hand, //activation of the dispatcher,// which means actively to dispatch commands, is controlled by the lifecycle of the session proper. The latter is just a data structure, and can be loaded / saved and rebuilt through the ''session manager''.
+The SteamDispatcher is at the heart of the //Session Subsystem.// Because the official interface for working on the session, the [[SessionCommand façade|SessionCommandFacade]], is expressed in terms of sending command messages to invoke predefined [[commands|CommandHandling]] to operate on the SessionInterface, the actual implementation of such a {{{SessionCommandService}}} needs a component to actually enqueue and dispatch those commands -- which is the {{{DispatcherLoop}}} within the SteamDispatcher. As usual, the ''lifecycle'' is controlled by a subsystem descriptor, which starts and stops the whole subsystem; and this starting and stopping in turn translates into starting/stopping of the dispatcher loop. On the other hand, //activation of the dispatcher,// which means actively dispatching commands, is controlled by the lifecycle of the session proper. The latter is just a data structure, and can be loaded / saved and rebuilt through the ''session manager''.
 
 !Lifecycle
As far as lifecycle is concerned, the »session subsystem« has to be distinguished from the //session proper,// which is just a data structure with its own, separate lifecycle considerations. Accessing the session data only makes sense when this data structure is fully loaded, while the //session subsystem// deals with performing commands on the session and with triggering the builder runs.
 
 !!!start-up
-The session subsystem lifecycle translates into method invocations on the {{{ProcDispatcher}}}, which in turn manages the parts actually implementing the session command processing and builder operations. This relation is expressed by holding onto the implementation as a //~PImpl.// As long as the {{{DispatcherLoop}}} object exists, the session subsystem can be considered in //running state.// This is equivalent to the following
+The session subsystem lifecycle translates into method invocations on the {{{SteamDispatcher}}}, which in turn manages the parts actually implementing the session command processing and builder operations. This relation is expressed by holding onto the implementation as a //~PImpl.// As long as the {{{DispatcherLoop}}} object exists, the session subsystem can be considered in //running state.// This is equivalent to the following
 * the ''session loop thread'' is spawned. This thread performs all of the session and builder operations (single-threaded).
 * the {{{SessionCommandService}}} is started and connected as implementation of the {{{SessionCommand}}} façade.
 
 !!!shutdown
-Shutdown is initiated by sending a message to the dispatcher loop. This causes the internal loop control to wake up and leave the loop, possibly after finishing a command or builder run currently in progress. When leaving the loop, the {{{sigTerm}}} of the SessionSubsystem is invoked, which then in turn causes the {{{DispatcherLoop}}} object to be deleted and the ProcDispatcher thus returned into halted state. 
+Shutdown is initiated by sending a message to the dispatcher loop. This causes the internal loop control to wake up and leave the loop, possibly after finishing a command or builder run currently in progress. When leaving the loop, the {{{sigTerm}}} of the SessionSubsystem is invoked, which then in turn causes the {{{DispatcherLoop}}} object to be deleted and the SteamDispatcher thus returned into halted state. 
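A minimal skeleton of this ~PImpl-based lifecycle coupling could look like the following; it is a hypothetical sketch that omits the session loop thread and the façade wiring, keeping only the rule that //running state// is equivalent to the existence of the {{{DispatcherLoop}}} object:

```cpp
#include <cassert>
#include <memory>

// In the real system this object would own the session loop thread
// and the command queue; here it is just a placeholder.
struct DispatcherLoop { };

class SteamDispatcher
{
    std::unique_ptr<DispatcherLoop> loop_;   // the PImpl

public:
    void start()                             // subsystem start-up
    {
        loop_ = std::make_unique<DispatcherLoop>();
    }
    void sigTerm()                           // invoked when leaving the loop
    {
        loop_.reset();                       // back to halted state
    }
    bool isRunning() const { return bool(loop_); }
};
```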
 
@@ -7346,11 +7293,60 @@ Shutdown is initiated by sending a message to the dispatcher loop. This causes t
 * in a future version, it may also encapsulate the communication in a distributed render farm
-
-The architecture of the Lumiera application separates functionality into three Layers: __GUI__, __Proc__ and __Backend__.
+
+The architecture of the Lumiera application separates functionality into three Layers: __Stage__, __Steam__ and __Vault__.
 
-While the Backend is responsible for Data access and management and for carrying out the computation intensive media opteratons, the middle Layer or ~Proc-Layer contains [[assets|Asset]] and [[Session]], i.e. the user-visible data model and provides configuration and behaviour for these entities. Besides, he is responsible for [[building and configuring|Builder]] the [[render engine|RenderEngine]] based on the current Session state.
-&rarr; UI-Layer
+The Steam-Layer, as the middle layer, transforms the structures of the usage domain into structures of the technical implementation domain, which can be processed efficiently with contemporary media processing frameworks. While the VaultLayer is responsible for data access and management and for carrying out the computation intensive media operations, the Steam-Layer contains [[assets|Asset]] and [[Session]], i.e. the user-visible data model, and provides configuration and behaviour for these entities. Besides, it is responsible for [[building and configuring|Builder]] the [[render engine|RenderEngine]] based on the current Session state. Furthermore, the [[Player]] subsystem, which coordinates render and playback operations, can be seen to reside at the lower boundary of the Steam-Layer.
+&rarr; [[Session]]
+&rarr; [[Player]]
+&rarr; UI-Layer
+&rarr; VaultLayer
+
+
+
+//The guard and coordinator of any operation within the session subsystem.//
+The session and related components work effectively single-threaded. Any tangible operation on the session data structure has to be enqueued as a [[command|CommandHandling]] into the dispatcher. Moreover, the [[Builder]] is triggered from the SteamDispatcher; and while the Builder is running, any command processing is halted. The Builder in turn creates or reshapes the processing nodes network, and the changed network is brought into operation with a //transactional switch// -- while render processes on this processing network operate unaffected and essentially multi-threaded.
+
+Enqueueing commands through the SessionCommandFacade into the SteamDispatcher is the official way to cause changes to the session. And the running state of the SteamDispatcher is equivalent to the running state of the //session subsystem as a whole.//
+
+!Requirements
+To function properly as action coordinator of the session subsystem, the dispatcher has to fulfil multiple demands
+;enqueue
+:accept and enqueue command messages concurrently, any time, without blocking the caller
+:*FIFO for //regular commands//
+:*LIFO for //priority requests//  {{red{unimplemented 1/17}}}
+;process
+:dequeue and process entries sequentially
+;sleep
+:work continuously until queue is empty, then enter wait state
+;check point
+:arrive at a well defined check point reliably and without blocking ("ensure to make progress")
+:* necessary to know when internal state is consistent
+:* when?
+:** after each command
+:** after builder run
+:** after wake-up
+;manage
+:care for rectifying entries in the queue
+:* ensure they //match// current session, discard obsoleted requests
+:* //aggregate// similar requests
+:* //supersede// by newer commands of a certain kind
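The queueing discipline demanded above -- non-blocking enqueue, FIFO for regular commands, LIFO for priority requests, sequential dequeue in the session thread -- could be sketched roughly as follows. This is an illustrative sketch only; all names are invented and do not correspond to actual Lumiera code.

```cpp
#include <deque>
#include <mutex>
#include <condition_variable>
#include <functional>
#include <optional>

// Sketch of the SteamDispatcher queueing discipline (invented names):
// enqueuing never blocks the caller beyond a short lock, regular commands
// are served FIFO, priority requests jump the queue (LIFO).
class CommandQueue
{
    std::deque<std::function<void()>> queue_;
    std::mutex mtx_;
    std::condition_variable wakeup_;

public:
    void enqueue (std::function<void()> cmd)         // regular command: FIFO
    {
        { std::lock_guard<std::mutex> lock{mtx_};
          queue_.push_back (std::move (cmd)); }
        wakeup_.notify_one();                        // rouse the session thread
    }
    void enqueuePriority (std::function<void()> cmd) // priority request: LIFO
    {
        { std::lock_guard<std::mutex> lock{mtx_};
          queue_.push_front (std::move (cmd)); }
        wakeup_.notify_one();
    }
    std::optional<std::function<void()>> dequeue()   // called from session thread only
    {
        std::lock_guard<std::mutex> lock{mtx_};
        if (queue_.empty()) return std::nullopt;     // empty queue: caller enters wait state
        auto cmd = std::move (queue_.front());
        queue_.pop_front();
        return cmd;
    }
};
```

Note the locking is only ever short term, matching the requirement that clients must never be blocked while enqueuing; the condition variable would serve the "sleep" and wake-up behaviour described below.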
+
+!Operational semantics
+The SteamDispatcher is a component with //running state.// There is some kind of working loop, which possibly enters a sleep state when idle. In fact, this loop is executed ''exclusively in the session thread''. This is the very essence of treating the session entirely single threaded, thus evading all the complexities of parallelism. Consequently, the session thread will either
+* execute a command on the session
+* perform the [[Builder]]
+* evaluate loop control logic in the SteamDispatcher
+* block waiting in the SteamDispatcher
+
+Initially the command queue is empty and the SteamDispatcher can be considered idle. Whenever more commands are available in the queue, the dispatcher will handle them one after another, without delay, until the queue is emptied. Yet the Builder run needs to be kept in mind. Essentially, the builder models a //dirty state:// whenever a command has touched the session, the corresponding LowLevelModel must be considered out of sync, possibly not reflecting the intended semantics of the session anymore. From a strictly logical view angle, we'd need to trigger the builder after each and every session command -- but it was a very fundamental design decision in Lumiera to allow for a longer running build process, more akin to running a compiler. This decision opens all the possibilities of integrating a knowledge based system and resolution activities to find a solution to match the intended session semantics. For this reason, we decouple the UI actions from session and render engine consistency, and we enqueue session commands, to throttle down the number of builder runs.
+
+So the logic to trigger builder runs has to take some leeway into account. Due to the typical interactive working style of an editing application, session commands might be trickling in, in bursts of similar commands, intermingled with tiny pauses. For this reason, the SteamDispatcher implements some //hysteresis,// as far as triggering the builder runs is concerned. The builder is fired in idle state, but only after passing some //latency period.// On the other hand, massive UI activities (especially during a builder run) may have flooded the queue, thus sending the session into an extended period of command processing. From the user's view angle, the application looks non responsive in such a case, albeit not frozen, since the UI can still enqueue further commands and thus retains the ability to react locally on user interaction. To mitigate this problem, the builder should be started anyway after some extended period of command processing, even if the queue is not yet emptied. Each builder run produces a structural diff message sent towards the UI and thus causes user visible changes within the session's UI representation. This somewhat stuttering response conveys to the user a tangible sensation of ongoing activity, while communicating at the same time, at least subconsciously, some degree of operational overload. {{red{note 12/2016 builder is not implemented, so consider this planning}}}
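The trigger logic described here -- build after a quiet latency period, but force a build after an extended stretch of continuous command processing -- boils down to a small decision rule. The following is a hedged sketch under invented names and arbitrary durations, not the planned implementation.

```cpp
#include <chrono>

using Clock    = std::chrono::steady_clock;
using Duration = Clock::duration;

// Sketch of the builder-trigger hysteresis (names and thresholds invented):
// given the session is "dirty" (a command touched it since the last build),
// decide whether the dispatcher should fire a builder run now.
struct BuildDecision
{
    Duration latency;        // quiet period to await before building when idle
    Duration maxBusyPeriod;  // force a build after this much uninterrupted work

    bool
    shouldBuild (bool dirty, bool queueEmpty,
                 Duration sinceLastCommand, Duration busyTime)  const
    {
        if (not dirty) return false;          // low-level model still in sync
        if (queueEmpty and sinceLastCommand >= latency)
            return true;                      // idle: latency period has passed
        return busyTime >= maxBusyPeriod;     // flooded queue: build anyway
    }
};
```

With such a rule, bursts of commands separated by pauses shorter than the latency coalesce into a single builder run, while a flooded queue still produces periodic diff messages towards the UI.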
+
+Any change to the circumstances determining the SteamDispatcher's behaviour needs to be imparted actively through the public interface -- the dispatcher is not designed to be a state listener or observer. Any such state change notifications are synchronised and cause a wakeup notification to the session thread. For this purpose, enqueuing of further commands counts as state change and is lock protected. Beyond that, any other activities, like //processing// of commands or builder runs, are performed within the session thread without blocking other threads; the locking on the SteamDispatcher is only ever short term to ensure consistent internal state. Clients need to be prepared for the effect of actions to appear asynchronously and with some delay. In particular, this means that a session switch or shutdown has to await completion of any session command or builder run currently in progress.
+
+When the session is closed or dismantled, further processing in the SteamDispatcher will be disabled, after completing the current command or builder run. This disabled state can be reversed when a new session instance becomes operative. And while the dispatcher will then continue to empty the command queue, most commands in queue will probably be obsoleted and dropped, because they refer to a defunct session instance. Moreover, the lifecycle of the session instances has to be distinguished from the lifecycle of the SessionSubsystem as such. When the latter is terminated, be it by a fatal error in some builder run, or be it due to general shutdown of the application, the SteamDispatcher will be asked to terminate the session thread after completing the current activity in progress. Such an event will also discard any further commands waiting in the dispatcher's queue.
+
Conversion of a media stream into a stream of another type is done by a processor module (plugin). The problem of finding such a module is closely related to the StreamType and especially [[problems of querying|StreamTypeQuery]] for such. (The builder uses a special Facade, the ConManager, to access this functionality). There can be different kinds of conversions, and the existence or non-existence of such a conversion can influence the stream type classification.
@@ -7395,7 +7391,7 @@ Media types vary largely and exhibit a large number of different properties, whi
 A stream type is denoted by a StreamTypeID, which is an identifier, acting as a unique key for accessing information related to the stream type. It corresponds to a StreamTypeDescriptor record, containing a &mdash; //not necessarily complete// &mdash; specification of the stream type, according to the classification detailed below.
 
 !! Classification
-Within the Proc-Layer, media streams are treated largely in a similar manner. But, looking closer, not everything can be connected together, while on the other hand there may be some classes of media streams which can be considered //equivalent// in most respects. Thus separating the distinction between various media streams into several levels seems reasonable...
+Within the Steam-Layer, media streams are treated largely in a similar manner. But, looking closer, not everything can be connected together, while on the other hand there may be some classes of media streams which can be considered //equivalent// in most respects. Thus separating the distinction between various media streams into several levels seems reasonable...
 * Each media belongs to a fundamental ''kind'' of media, examples being __Video__, __Image__, __Audio__, __MIDI__, __Text__,... <br/>Media streams of different kind can be considered somewhat "completely separate" &mdash; just the handling of each of those media kinds follows a common //generic pattern// augmented with specialisations. Basically, it is //impossible to connect// media streams of different kind. Under some circumstances there may be the possibility of a //transformation// though. For example, a still image can be incorporated into video, sound may be visualized, MIDI may control a sound synthesizer.
 * Below the level of distinct kinds of media streams, within every kind we have an open ended collection of ''prototypes'', which, when compared directly, may each be quite distinct and different, but which may be //rendered//&nbsp; into each other. For example, we have stereoscopic (3D) video and we have the common flat video lacking depth information, we have several spatial audio systems (Ambisonics, Wave Field Synthesis), we have panorama simulating sound systems (5.1, 7.1,...), we have common stereophonic and monaural audio. It is considered important to retain some openness and configurability within this level of distinction, which means this classification should better be done by rules than by setting up a fixed property table. For example, it may be desirable for some production to distinguish between digitized film and video NTSC and PAL, while in another production everything is just "video" and can be converted automatically. The most noticeable consequence of such a distinction is that any Bus or [[Pipe]] is always limited to a media stream of a single prototype. (&rarr; [[more|StreamPrototype]])
 * Besides the distinction by prototypes, there are the various media ''implementation types''. This classification is not necessarily hierarchically related to the prototype classification, while in practice commonly there will be some sort of dependency. For example, both stereophonic and monaural audio may be implemented as 96kHz 24bit PCM with just a different number of channel streams, but we may as well get a dedicated stereo audio stream with two channels multiplexed into a single stream. For dealing with media streams of various implementation type, we need //library// routines, which also yield a //type classification system.// Most notably, for raw sound and video data we use the [[GAVL]] library, which defines a classification system for buffers and streams.
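The three classification levels described above -- fundamental kind, open-ended prototype, and implementation type -- could be rendered as a plain data record. This is a sketch for illustration only; the names are invented and do not reflect the actual StreamTypeDescriptor.

```cpp
#include <string>

// Illustrative sketch of the three-level stream classification (invented names)
enum class MediaKind { Video, Image, Audio, MIDI, Text };

struct StreamTypeDescriptor
{
    MediaKind   kind;       // fundamental kind: impossible to connect across kinds
    std::string prototype;  // open-ended, rule-defined (e.g. "stereo", "5.1", "3D video")
    std::string implType;   // library-level implementation type (e.g. a GAVL format tag)
};

// streams connect directly only when all three levels agree; with equal
// prototype but differing implType, an automatic conversion could apply
inline bool
directlyConnectable (StreamTypeDescriptor const& a, StreamTypeDescriptor const& b)
{
    return a.kind      == b.kind
       and a.prototype == b.prototype
       and a.implType  == b.implType;
}
```

The point of the layering shows up in the predicate: mismatching kinds rule out any connection, while a prototype or implementation mismatch leaves room for rendering or conversion.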
@@ -7487,7 +7483,7 @@ Independent from these is __another Situation__ where we query for a type ''by I
 
-Questions regarding the use of StreamType within the Proc-Layer.
+Questions regarding the use of StreamType within the Steam-Layer.
 * what is the relation between Buffer and Frame?
 * how to get the required size of a Buffer?
 * who does buffer allocations and how?
@@ -7752,12 +7748,12 @@ The arrangement of subsystems is part of the architecture; separation into subsy
 !Layers and Subsystems
 ;GUI
 :the actual user interface is loaded and started as a plug-in. It is typically monolithic and thus counts as //one// subsystem (but there might be several alternative interfaces)
-;~Proc-Layer
+;~Steam-Layer
 :this is the metadata and organisational layer and serves to accommodate the user oriented view to the technical necessities of rendering and playback
 :* SessionSubsystem, comprised of the [[Session(datastructure)|Session]] and the [[Builder]]
 :* [[Player]]
 :* RenderEngine
-;Backend
+;Vault
 :here the goal is to provide system level services for the upper layers. The structuring is not so clear yet {{red{1/2014}}}
 :* probably the [[Scheduler]] gets the ability to be started explicitly
 :* the other services are started in conjunction and form a common subsystem
@@ -8786,9 +8782,9 @@ function addKeyDownHandlers(e)
 
 
-
+
The Name of the Software driving this Wiki. It is written completely in ~JavaScript and contained in one single HTML page.
-Thus no server and no network connection is needed. Simply open the file in your browser and save changes locally. As the [[Engine/Development TiddlyWiki|SteamLayer]] HTML is located in the Lumiera source tree, all changes will be managed and distributed via GIT. While doing so, you sometimes will have to merge conflicing changes manually in the HTML source.
+Thus no server and no network connection is needed. Simply open the file in your browser and save changes locally. As the [[Engine/Development TiddlyWiki|CoreDevelopment]] HTML is located in the Lumiera source tree, all changes will be managed and distributed via GIT. While doing so, you sometimes will have to merge conflicting changes manually in the HTML source.
  * see GettingStarted
  * see [[Homepage|http://tiddlywiki.org]], [[Wiki-Markup|http://tiddlywiki.org/#Markup]], [[CSS-formatting|http://tiddlywiki.org/#%5B%5BCSS%20Formatting%5D%5D]]
 
@@ -8939,7 +8935,7 @@ __tagged values__: quantised values are explicitly created out of continuous val
 __delayed quantisation__: with this approach, the information loss is delayed as long as possible. Quantised time values are rather treated as promise for quantisation, while the actual time data remains unaltered. Additionally, they carry a tag, or even a direct link to the responsible quantiser instance. Effectively, these are specialised time values, instances of a sub-concept, able to stand-in for general time values, but exposing additional accessors to get a quantised value.
 !!!discussion
-For Lumiera, the static typing approach is of limited value -- it excels when values belonging to different scales are actually treated differently. There are such cases, but rather on the data handling level, e.g. sound samples are always handled block wise. But regarding time values, the unifying aspect is more important, which leads to prefering a dynamic (run time typed) approach, while //erasing// the special differences most of the time. Yet the dynamic and open nature of the Lumiera high-level model favours the delayed quantisation pattern; the same values may require different quantisation depending on the larger model context an object is encountered in. This solution might be too general and heavy weight at times though. Thus, for important special cases, the accessors should return tagged values, preferably even with differing static type. Time codes can be integrated this way, but most notably the ''frame numbers'' used for addressing throughout the backend, can be implemented as such specifically typed tagged values; the tag here denotes the quantiser and thus the underlying grid -- it should be implemented as hash-ID for smooth integration with code written in plain C.
+For Lumiera, the static typing approach is of limited value -- it excels when values belonging to different scales are actually treated differently. There are such cases, but rather on the data handling level, e.g. sound samples are always handled block wise. But regarding time values, the unifying aspect is more important, which leads to preferring a dynamic (run time typed) approach, while //erasing// the special differences most of the time. Yet the dynamic and open nature of the Lumiera high-level model favours the delayed quantisation pattern; the same values may require different quantisation depending on the larger model context an object is encountered in. This solution might be too general and heavy weight at times though. Thus, for important special cases, the accessors should return tagged values, preferably even with differing static type. Time codes can be integrated this way, but most notably the ''frame numbers'' used for addressing throughout the Vault can be implemented as such specifically typed tagged values; the tag here denotes the quantiser and thus the underlying grid -- it should be implemented as hash-ID for smooth integration with code written in plain C.
 At the level of individual timecode formats, we're lacking a common denominator; thus it is preferrable to work with different concrete timecode classes through //generic programming.// This way, each timecode format can expose operations specific only to the given format. Especially, different timecode formats expose different //component fields,// modelled by the generic ''Digxel'' concept. There is a common baseclass ~TCode though, which can be used as marker or for //type erasure.//
 &rarr; more on [[usage situations|TimeUsage]]
@@ -9099,7 +9095,7 @@ As stated in the [[definition|Timeline]], a timeline refers to exactly one seque
 This is because the top-level entities (Timelines) are not permitted to be combined further. You may play or render a given timeline, you may even play several timelines simultaneously in different monitor windows, and these different timelines may incorporate the same sequence in a different way.
 The Sequence just defines the relations between some objects and may be placed relatively to another object (clip, label,...) or similar reference point, or even anchored at an absolute time if desired. In a similar open fashion, within the track-tree of a sequence, we may define a specific signal routing, or we may just fall back to automatic output wiring.
 !Attaching output
-The Timeline owns a list of global [[pipes (busses)|Pipe]] which are used to collect output. If the track tree of a sequence doesn't contain specific routing advice, then connections will be done directly to these global pipes in order and by matching StreamType (i.e. typically video to video master, audio to stereo audio master). When a monitor (viewer window) is attached to this timeline, similar output connections are made from those global pipes, i.e. the video display will take the contents of the first video (master) bus, and the first stereo audio pipe will be pulled and sent to system audio out. The timeline owns a ''play control'' shared by all attached viewers and coordinating the rendering-for-viewing. Similarly, a render task may be attached to the timeline to pull the pipes needed for a given kind of generated output. The actual implementation of the play controller and the coordination of render tasks is located in the Backend, which uses the service of the Proc-Layer to pull the respective exit nodes of the render engine network.
+The Timeline owns a list of global [[pipes (busses)|Pipe]] which are used to collect output. If the track tree of a sequence doesn't contain specific routing advice, then connections will be done directly to these global pipes in order and by matching StreamType (i.e. typically video to video master, audio to stereo audio master). When a monitor (viewer window) is attached to this timeline, similar output connections are made from those global pipes, i.e. the video display will take the contents of the first video (master) bus, and the first stereo audio pipe will be pulled and sent to system audio out. The timeline owns a ''play control'', shared by all attached viewers and coordinating the rendering-for-viewing. Similarly, a render task may be attached to the timeline to pull the pipes needed for a given kind of generated output. The actual implementation of the play controller and the coordination of render tasks is located in the Vault, which uses the service of the Steam-Layer to pull the respective exit nodes of the render engine network.
 !Timeline versus Timeline View
 Actually, what the [[GUI creates and uses|GuiTimelineView]] is the //view// of a given timeline. This makes no difference to start with, as the view is modelled to be a sub-concept of "timeline" and thus can stand-in. All different views of the //same// timeline also share one single play control instance, i.e. they all have one single playhead position. Doing it this way should be the default, because it's the least confusing. Anyway, it's also possible to create multiple //independent timelines// &mdash; in an extreme case even so when referring to the same top-level sequence. This configuration gives the ability to play the same arrangement in parallel with multiple independent play controllers (and thus independent playhead positions)
@@ -9814,7 +9810,7 @@ see [[implementation planning|TypedLookup]]
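The "specifically typed tagged value" idea for frame numbers -- a frame count tagged with the hash-ID of its quantiser grid, in a layout that plain C can handle -- could be sketched like this. Names are invented for illustration; this is not the actual Lumiera type.

```cpp
#include <cstdint>

// Sketch of a tagged frame number (invented names): the frame count carries
// the hash-ID of the grid (quantiser) it was quantised against, so the value
// remains meaningful -- and checkable -- outside its originating context.
// Trivial layout, suitable for passing through plain-C interfaces.
struct FrameNr
{
    int64_t  frame;    // frame count, relative to the grid's origin
    uint64_t gridID;   // hash-ID denoting the quantiser and thus the grid
};

// mixing frame numbers from different grids would be a logic error,
// which the tag makes detectable at runtime
inline bool sameGrid (FrameNr a, FrameNr b) { return a.gridID == b.gridID; }
```

Arithmetic combining two frame numbers would first be guarded by such a check, instead of silently adding counts that refer to different time grids.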
-TypedID is a registration service to associate object identities, symbolic identifiers and types. It acts as frontend to the TypedLookup system within Proc-Layer, at the implementation level. While TypedID works within a strictly typed context, this type information is translated into an internal index on passing over to the implementation, which manages a set of tables holding base entries with a combined symbolic+hash ID, plus an opaque buffer. Thus, the strictly typed context is required to re-access the stored data. But the type information wasn't erased entirely, so this typed context can be re-gained with the help of an internal type index. All of this is considered implementation detail and may be subject to change without further notice; any access is assumed to happen through the TypedID frontend. Besides, there are two more specialised frontends.
+TypedID is a registration service to associate object identities, symbolic identifiers and types. It acts as frontend to the TypedLookup system within Steam-Layer, at the implementation level. While TypedID works within a strictly typed context, this type information is translated into an internal index on passing over to the implementation, which manages a set of tables holding base entries with a combined symbolic+hash ID, plus an opaque buffer. Thus, the strictly typed context is required to re-access the stored data. But the type information wasn't erased entirely, so this typed context can be re-gained with the help of an internal type index. All of this is considered implementation detail and may be subject to change without further notice; any access is assumed to happen through the TypedID frontend. Besides, there are two more specialised frontends.
 
 !Front-ends
 * TypedID uses static but templated access functions, plus a singleton instance to manage a ~PImpl pointing to the ~TypedLookup table
@@ -9890,7 +9886,7 @@ The UI-Bus has a star shaped topology, with a central "bus master" hub
 ;act
 :send a GenNode representing the action
 :* a command prototype corresponding to the message's ID is cloned and outfitted with actual parameter values
-:* the resulting command instance is handed over to the ProcDispatcher for execution
+:* the resulting command instance is handed over to the SteamDispatcher for execution
 ;note
 :send a GenNode representing the //state mark//
 :some (abstracted) [[presentation state|PresentationState]] manager is expected to listen to these messages, possibly recording state to be restored later
@@ -9907,15 +9903,15 @@ As a starting point, we know
 * the latter is somehow related to the [[UI-model|GuiModel]] (one impersonates or represents the other)
 * each {{{gui::model::Tangible}}} has a ''bus-terminal'', which is linked to the former's identity
 * it is possible to wire ~SigC signals so to send messages via this terminal into the UI-Bus
-* these messages translate into command invocations towards the Proc-Layer
-* Proc-Layer responds asynchroneously with a diff message
+* these messages translate into command invocations towards the Steam-Layer
+* Steam-Layer responds asynchronously with a diff message
 * the GuiModel translates this into notifications of the top level changed elements
 * these in turn request a diff and then update themselves into compliance.
 
 !Behaviours
 For some arbitrary reason, any element in the UI can appear and go away. This corresponds to attachment and deregistration at the UI-Bus
 
-In regular, operative state, an interface element may initiate //actions.// These correspond to invocation of [[Proc-Layer commands|CommandHandling]], after having supplied the necessary arguments. Commands are referred and called //by name// (command-ID) and they are thus not bound to a specific UI-Element. Rather, it is the job of the UI to provide sensible command arguments, based on the context of the operation. There might be higher-level, cooperative [[gestures|Gesture]] within the interface, and actions might be formed like sentences, with the help of a FocusConcept -- however, at the the end of the day, there is a ''subject'' and a ''predicate''. And the interface element takes on the role of the underlying, the subject, the ''tangible''.
+In regular, operative state, an interface element may initiate //actions.// These correspond to invocation of [[Steam-Layer commands|CommandHandling]], after having supplied the necessary arguments. Commands are referred and called //by name// (command-ID) and they are thus not bound to a specific UI-Element. Rather, it is the job of the UI to provide sensible command arguments, based on the context of the operation. There might be higher-level, cooperative [[gestures|Gesture]] within the interface, and actions might be formed like sentences, with the help of a FocusConcept -- however, at the end of the day, there is a ''subject'' and a ''predicate''. And the interface element takes on the role of the underlying, the subject, the ''tangible''.
 
 Some actions are very common and can be represented by a shorthand. An example would be to tweak some property, which means to mutate the attribute of a model element known beforehand. Such tweaks are often caused by direct interaction, and thus have the tendency to appear in flushes, which might be batched to remove some load from the lower layers.
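The pattern described above -- commands referred //by name//, a stored prototype cloned per invocation and outfitted with actual arguments before being handed over for dispatch -- can be condensed into a small sketch. All names here are invented for illustration and do not reflect the actual CommandHandling API.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch of command-by-name invocation (invented names): the UI supplies
// the command-ID plus arguments gleaned from the interaction context; the
// registry binds both into a closure, ready to be enqueued for dispatch.
using Args    = std::vector<std::string>;
using Command = std::function<void(Args const&)>;

class CommandRegistry
{
    std::map<std::string, Command> prototypes_;

public:
    void
    define (std::string id, Command prototype)
    {
        prototypes_[std::move (id)] = std::move (prototype);
    }

    // bind actual arguments now, invoke (dispatch) later
    std::function<void()>
    bind (std::string const& id, Args args)  const
    {
        Command prototype = prototypes_.at (id);   // throws if the command-ID is unknown
        return [prototype, args]{ prototype (args); };
    }
};
```

The essential point is the separation in time: argument binding happens in UI context, while execution happens later, in the session thread, wherever the bound closure is eventually dispatched.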
 
@@ -9962,7 +9958,7 @@ Anyway, whenever such a relevant change happens -- consider e.g. the presentatio
 While the above definitions might seem more or less obvious and reasonable, there is one tiny detail, which -- on second thought -- unfolds into a fundamental decision to be taken. The point in question is //how we refer to a command.// More specifically: is referring to a command something generic, or is it rather something left to the actual implementing widget? In the first case, a generic foundation element has to provide some framework to deal with command definitions, whereas in the second case just a protected slot to pass on invocations from derived classes would be sufficient. This is a question of fundamental importance; subsidiarity has its merits, so once we forgo the opportunity to build from a generic pattern, local patterns will take over, while similarities and symmetries have to grow and wait to be discovered sometimes, if at all. This might actually not be a problem -- yet if you know Lumiera, you know that we tend to look at existing practice and draw fundamental conclusions, prior to acting.
 &rarr; InteractionControl
 &rarr; GuiCommandBinding
-&rarr; [[Command handling (Proc-Layer)|CommandHandling]]
+&rarr; [[Command handling (Steam-Layer)|CommandHandling]]
 
 !!!actual implementation of command invocation
 In the simple standard case, an UI event (like pressing a button) leads directly to invocation of a known command with locally known arguments. In such cases, the command can be triggered right away, using the nearest UI-Bus connection available. But there are more complicated cases, where invoking the command happens as a result of user interaction, and some of the actual arguments need to be picked up from the current context by suitable match. To deal with such cases, the InteractionState helper is used to pick up this contextual data.
@@ -9980,9 +9976,9 @@ The dispatch of //diff messages// is directly integrated into the UI-Bus -- whic
 
-The architecture of the Lumiera application separates functionality into three Layers: __GUI__, __Proc__ and __Backend__.
+The architecture of the Lumiera application separates functionality into three Layers: __Stage__, __Steam__ and __Vault__.
 
-The Graphical User interface, the upper layer in this hierarchy, embodies everything of tangible relevance to the user working with the application. The interplay with Proc-Layer, the middle layer below the UI, is organised along the distinction between two realms of equal importance: on one side, there is the immediate //mechanics of the interface,// which is implemented directly within the ~UI-Layer, based on the Graphical User Interface Toolkit. And, on the other side, there are those //core concerns of working with media,// which are cast into the HighLevelModel at the heart of the middle layer.
+The Graphical User Interface, the upper layer in this hierarchy, embodies everything of tangible relevance to the user working with the application. The interplay with the Steam-Layer, the middle layer below the UI, is organised along the distinction between two realms of equal importance: on one side, there is the immediate //mechanics of the interface,// which is implemented directly within the ~UI-Layer, based on the Graphical User Interface Toolkit. And, on the other side, there are those //core concerns of working with media,// which are cast into the HighLevelModel at the heart of the middle layer.
//A topological addressing scheme to designate structural locations within the UI.//
@@ -10177,7 +10173,7 @@ Design is an experiment to find out how things are related. We can't //plan// a
 
 !!!!Performance Considerations
 * within the Engine the Render Nodes contain the ''inner loop'', whose contents are to be executed hundreds of thousands to millions of times per frame. Every dispensable concern, which is not strictly necessary to get the job done, is worth the effort of factoring out here.
-* performance pressure at the builder level is far lower, albeit still existent. Compared to the effort of calculating a single processing step, looping even over some hundred nodes and executing quite some logic is negligible. Danger bears on creating memory pressure or causing awkward execution patterns (in the backend) rather. So the main concern should be the ability of reconfiguring different aspects separately without much effort. If for example a given render strategy works out to create lots of deadlocks and waitstates in the backend, the design should account for the possibility to exchange it with another strategy without having to modify the inner workings of the build process.<br>On the other hand, I wouldn't be overly concerned to trigger the build process yet another time to get some specific problem solved. However, the possibility to share one Render configuration for, say, 20 sec of video, instead of triggering the build process 500 times for every frame in this timespan, would sure be worth considering if it's not overly complicated to achieve.
+* performance pressure at the builder level is far lower, albeit still existent. Compared to the effort of calculating a single processing step, looping even over some hundred nodes and executing quite some logic is negligible. The danger lies rather in creating memory pressure or causing awkward execution patterns (in the Vault). So the main concern should be the ability of reconfiguring different aspects separately without much effort. If for example a given render strategy works out to create lots of deadlocks and waitstates in the Vault, the design should account for the possibility to exchange it with another strategy without having to modify the inner workings of the build process.<br>On the other hand, I wouldn't be overly concerned to trigger the build process yet another time to get some specific problem solved. However, the possibility to share one Render configuration for, say, 20 sec of video, instead of triggering the build process 500 times for every frame in this timespan, would surely be worth considering if it's not overly complicated to achieve.
 * contrary to this, the session level is harmless with respect to performance. Getting acceptable responsiveness on user interactions is sufficient. We could consider using high level languages here, for it is much more important being able to express and handle complicated object relationships with relative ease. The only (indirect) concern is to avoid generating memory pressure inadvertently. Edit actions generating memory peaks could interfere with an ongoing background render process. If we decide to use lots of relation objects or transient objects, we should use an object pool or, still better, a garbage collector.
 
 !!!!Concepts and Interfaces
@@ -10201,7 +10197,7 @@ The &raquo;current setup&laquo; of the objects in the session is sort of
 
-within Lumiera's ~Proc-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring deals with how to define and control these connections, how they are to be detected by the builder and finally implemented by links in the render engine.
+Within Lumiera's ~Steam-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring deals with how to define and control these connections, how they are to be detected by the builder and finally implemented by links in the render engine.
 &rarr; see OutputManagement
 &rarr; see OutputDesignation
 &rarr; see OperationPoint