Invocation: code clean-up and documentation
Remove left-overs from the preceding prototypical implementation, which is now superseded by the change to a flexibly configured `FeedManifold` with structured, typed storage for buffers and for parameter data. The Render Node invocation sequence, as rearranged and reworked for the »Playback Vertical Slice«, now seems reasonably clear and settled. Add extensive documentation to describe the conventions and structures worked out thus far; moreover, start a makeover of the old documentation in the TiddlyWiki to remove concepts that are now obviously obsolete...
This commit is contained in:
parent
e46ff7a8a7
commit
81ef3c62e9
15 changed files with 526 additions and 664 deletions
@@ -17,7 +17,6 @@ On occasion, we'll replace them by drawings from our current UML model.
128901: classdiagram 129285 "Builder Tool (Visitor)"
128901: activitydiagram 129413 "build flow"
129029: activitydiagram 129541 "the render configuration flow"
128389: classdiagram 129669 "Automation Entities"
128645: deploymentdiagram 129797 "Source Overview"
128005: componentdiagram 130053 "proc-components"
128517: classdiagram 130181 "Hierarchy"
@@ -32,9 +31,6 @@ On occasion, we'll replace them by drawings from our current UML model.
131077: componentdiagram 131589 "components"
131077: usecasediagram 131717 "when to query"
131077: collaborationdiagram 131845 "\"default\" object"
128389: classdiagram 131973 "Render Mechanics"
129285: collaborationdiagram 132229 "Render Process"
128389: classdiagram 132357 "StateAdapter composition"
128517: classdiagram 132485 "Stream Type Framework"
128005: classdiagram 132741 "TimelineSequences"
128901: classdiagram 132868 "Builder Entities"
Binary file not shown. (Before: 34 KiB)
Binary file not shown. (Before: 20 KiB)
Binary file not shown. (Before: 30 KiB)
Binary file not shown. (Before: 21 KiB)
@@ -57,20 +57,15 @@
 ** - essentially, FeedManifold is structured storage with some default wiring.
 ** - the trait functions #hasInput() and #hasParam() should be used by downstream code
 **   to find out if some part of the storage is present, and branch accordingly...
 ** @todo 12/2024 figure out how constructor arguments can be passed flexibly
 ** @remark in the first draft version of the Render Engine from 2009/2012, there was an entity
 **         called `BuffTable`, which however provided additional buffer-management capabilities.
 **         This name describes the basic functionality well, which can be hard to see amid all
 **         the additional meta-programming related to the flexible functor signature. When it
 **         comes to actual invocation, we collect input buffers from predecessor nodes and
 **         we prepare output buffers, and then we pass both to a processing function.
 **
 ** @todo WIP-WIP 12/2024 now about to introduce support for arbitrary parameters into the
 **       Render-Engine code, which has been reworked for the »Playback Vertical Slice«.
 **       We still have to reach the point where the engine becomes operational!
 ** @see NodeBase_test
 ** @see weaving-pattern-builder.hpp
 ** @see lib::meta::ElmTypes in variadic-helper.hpp : uniform processing of »tuple-like« data
 ** @see \ref lib::meta::ElmTypes in variadic-helper.hpp "uniform processing of »tuple-like« data"
 */

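The doc comment above asks downstream code to branch on the `constexpr` traits #hasInput() and #hasParam(). The following is a minimal, self-contained sketch of that idiom using `if constexpr`; the classification trait `MockClassify` and the helper `storageSlots()` are illustrative stand-ins, not Lumiera's actual `_ProcFun` machinery:

```cpp
// Toy classification of a processing-function signature: a functor taking
// only buffer pointers "has input", one with a leading value argument
// additionally "has param". (Names here are hypothetical, for illustration.)
template<typename FUN>
struct MockClassify;

template<typename O>
struct MockClassify<void(*)(O*)>      { static constexpr bool input=false, param=false; };

template<typename I, typename O>
struct MockClassify<void(*)(I*,O*)>   { static constexpr bool input=true,  param=false; };

template<typename P, typename I, typename O>
struct MockClassify<void(*)(P,I*,O*)> { static constexpr bool input=true,  param=true;  };

template<typename FUN>
constexpr bool hasInput() { return MockClassify<FUN>::input; }

template<typename FUN>
constexpr bool hasParam() { return MockClassify<FUN>::param; }

// Downstream code can then branch at compile time, as the comment suggests:
template<typename FUN>
constexpr int
storageSlots()
{
    int slots = 1;                       // output storage is always present
    if constexpr (hasInput<FUN>()) ++slots;
    if constexpr (hasParam<FUN>()) ++slots;
    return slots;
}

static_assert (storageSlots<void(*)(float*)>()            == 1);
static_assert (storageSlots<void(*)(float*,float*)>()     == 2);
static_assert (storageSlots<void(*)(int,float*,float*)>() == 3);
```

Because the traits are `constexpr`, the branches that do not apply are discarded at compile time, so absent storage parts need not even exist in the instantiated type.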
@@ -88,13 +83,10 @@
#include "lib/meta/variadic-helper.hpp"
#include "lib/meta/generator.hpp"
#include "lib/test/test-helper.hpp"
//#include "lib/several.hpp"

#include <tuple>


////////////////////////////////TICKET #826 12/2024 the invocation sequence has been reworked and reoriented for integration with the Scheduler

namespace steam {
namespace engine {

@@ -226,7 +218,6 @@ namespace engine {
        , SLOT_I = _Case<Sig>::SLOT_I
        , SLOT_O = _Case<Sig>::SLOT_O
        , SLOT_P = 0
        , MAXSZ  = std::max (uint(FAN_I), uint(FAN_O)) /////////////////////OOO required temporarily until the switch to tuples
        };

      static constexpr bool hasInput() { return SLOT_I != SLOT_O; }
@@ -239,9 +230,6 @@
      static_assert (is_BuffSlot<SLOT_I>, "Input slot of the function must accept buffer pointers");
      static_assert (is_ParamSlot<SLOT_P> or not hasParam()
                    ,"Param slot must accept value data");

      using BuffO = typename ArgO::List::Head;
      using BuffI = typename std::conditional<hasInput(), typename ArgI::List::Head, BuffO>::type; /////////////////////////TODO obsolete ... remove after switch
    };

@@ -304,9 +292,11 @@ namespace engine {
  }//(End)Introspection helpers.


  template<class FUN, class PAM =_Disabled>
  class FeedPrototype;


  /**
   * Configuration context for a FeedManifold.
   * This type-rebinding helper provides a storage configuration
@@ -412,51 +402,38 @@
    };


  /**
   * Adapter to connect input/output buffers to a processing functor backed by an external library.
   * Essentially, this is structured storage tailored specifically to a given functor signature.
   * Tables of buffer handles are provided for the downstream code to store results received from
   * preceding nodes or to pick up calculated data after invocation. From these BuffHandle entries,
   * buffer pointers are retrieved and packaged suitably for use by the wrapped invocation functor.
   * This setup is intended for use by a »weaving pattern« within the invocation of a processing node
   * for the purpose of media processing or data calculation.
   *
   * # Interface exposed to down-stream code
   * Data fields are typed to suit the given functor \a FUN, and are present only when needed
   * - `param` holds a parameter value or tuple of values, as passed to the constructor
   * - `inBuff` and `outBuff` are chunks of \ref UninitialisedStorage with suitable dimension
   *   to hold an array of \ref BuffHandle to organise input- and output-buffers
   * - the constants `FAN_P`, `FAN_I` and `FAN_O` reflect the number of individual elements
   *   connected for parameters, inputs and outputs respectively.
   * - `inBuff.array()` and `outBuff.array()` expose the storage for handles as std::array,
   *   with suitable dimension, subscript-operator and iteration. Note however that the
   *   storage itself is _uninitialised_ and existing handles must be _emplaced_ by
   *   invoking copy-construction e.g. `outBuff.createAt (idx, givenHandle)`
   * - after completely populating all BuffHandle slots this way, FeedManifold::connect()
   *   will pick up buffer pointers and transfer them into the associated locations in
   *   the input and output arguments `inArgs` and `outArgs`
   * - finally, FeedManifold::invoke() will trigger the stored processing functor,
   *   passing `param`, `inArgs` and `outArgs` as appropriate.
   * The `constexpr` functions #hasInput() and #hasParam() can be used to find out
   * if the functor was classified to take inputs and / or parameters.
   * @note destructors of parameter values will be invoked, but nothing will be done
   *       for the BuffHandle elements; the caller is responsible to perform the
   *       buffer management protocol, i.e. invoke BuffHandle::emit()
   *       and BuffHandle::release()
   *
   * @todo WIP-WIP 12/24 now adding support for arbitrary parameters /////////////////////////////////////TICKET #1386
   */
  template<class FUN>
  struct FeedManifold
    : util::NonCopyable
    {
      enum{ STORAGE_SIZ = _ProcFun<FUN>::MAXSZ };
      using BuffS = lib::UninitialisedStorage<BuffHandle,STORAGE_SIZ>;

      BuffS inBuff;
      BuffS outBuff;
    };

  /**
   * Adapter to connect input/output buffers to a processing functor backed by an external library.
   * Essentially, this is structured storage tailored specifically to a given functor signature.
   * Tables of buffer handles are provided for the downstream code to store results received from
   * preceding nodes or to pick up calculated data after invocation. From these BuffHandle entries,
   * buffer pointers are retrieved and packaged suitably for use by the wrapped invocation functor.
   * This setup is intended for use by a »weaving pattern« within the invocation of a processing node
   * for the purpose of media processing or data calculation.
   *
   * # Interface exposed to down-stream code
   * Data fields are typed to suit the given functor \a FUN, and are present only when needed
   * - `param` holds a parameter value or tuple of values, as passed to the constructor
   * - `inBuff` and `outBuff` are chunks of \ref UninitialisedStorage with suitable dimension
   *   to hold an array of \ref BuffHandle to organise input- and output-buffers
   * - the constants `FAN_P`, `FAN_I` and `FAN_O` reflect the number of individual elements
   *   connected for parameters, inputs and outputs respectively.
   * - `inBuff.array()` and `outBuff.array()` expose the storage for handles as std::array,
   *   with suitable dimension, subscript-operator and iteration. Note however that the
   *   storage itself is _uninitialised_ and existing handles must be _emplaced_ by
   *   invoking copy-construction e.g. `outBuff.createAt (idx, givenHandle)`
   * - after completely populating all BuffHandle slots this way, FeedManifold::connect()
   *   will pick up buffer pointers and transfer them into the associated locations in
   *   the input and output arguments `inArgs` and `outArgs`
   * - finally, FeedManifold::invoke() will trigger the stored processing functor,
   *   passing `param`, `inArgs` and `outArgs` as appropriate.
   * The `constexpr` functions #hasInput() and #hasParam() can be used to find out
   * if the functor was classified to take inputs and / or parameters.
   * @note destructors of parameter values will be invoked, but nothing will be done
   *       for the BuffHandle elements; the caller is responsible to perform the
   *       buffer management protocol, i.e. invoke BuffHandle::emit()
   *       and BuffHandle::release()
   */
  template<class FUN>
  struct FeedManifold
    : _StorageSetup<FUN>::Storage

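The »uninitialised storage + explicit emplacement« scheme described in the comment above (storage exists, but handles must be emplaced via something like `createAt (idx, givenHandle)` before use) can be sketched in isolation. Everything here — `RawStorage`, `Handle`, `createAt` — is a simplified, hypothetical analogue of `lib::UninitialisedStorage` and `BuffHandle`, not the actual Lumiera API:

```cpp
#include <cstddef>
#include <new>
#include <utility>

// Raw, suitably aligned storage for N objects of type T; no object lifetimes
// start until the caller explicitly emplaces something. Mirrors the caveat in
// the docs: accessing a slot before emplacement is undefined behaviour.
template<typename T, std::size_t N>
struct RawStorage
{
    alignas(T) std::byte buf[N * sizeof(T)];

    T* ptr (std::size_t idx) { return reinterpret_cast<T*>(buf) + idx; }

    template<typename...ARGS>
    T& createAt (std::size_t idx, ARGS&& ...args)   // emplace by (copy-)construction
    {
        return *new(ptr(idx)) T{std::forward<ARGS>(args)...};
    }

    T& operator[] (std::size_t idx) { return *ptr(idx); }   // no lifetime check!
};

struct Handle { int bufferID; };    // trivial stand-in for a buffer handle

// Populate all slots, then read them back — the usage pattern the docs describe.
int sumIDs()
{
    RawStorage<Handle,3> outBuff;               // storage exists, handles do not
    for (int i=0; i<3; ++i)
        outBuff.createAt (i, Handle{10+i});     // emplace each handle explicitly
    return outBuff[0].bufferID + outBuff[1].bufferID + outBuff[2].bufferID;
}
```

Note that `RawStorage` runs no destructors, which matches the @note above: clean-up of the handles remains the caller's responsibility.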
@@ -672,65 +649,5 @@ namespace engine {
        }
    };


  /**
   * Adapter to handle a simple yet common setup for media processing
   * - somehow we can invoke processing as a simple function
   * - this function takes two arrays: the input- and output buffers
   * @remark this setup is useful for testing, and as a documentation example;
   *         actually the FeedManifold is mixed in as baseclass, and the
   *         buffer pointers are retrieved from the BuffHandles.
   * @tparam MAN a FeedManifold, providing arrays of BuffHandles
   * @tparam FUN the processing function
   */
  template<class MAN, class FUN>
  struct SimpleFunctionInvocationAdapter
    : MAN
    {
      using BuffI = remove_pointer_t<typename _ProcFun<FUN>::BuffI>;
      using BuffO = remove_pointer_t<typename _ProcFun<FUN>::BuffO>;

      enum{ N     = MAN::STORAGE_SIZ
          , FAN_I = _ProcFun<FUN>::FAN_I
          , FAN_O = _ProcFun<FUN>::FAN_O
          };

      static_assert(FAN_I <= N and FAN_O <= N);

//    using ArrayI = typename _ProcFun<FUN>::SigI;
      using ArrayI = typename _Fun<FUN>::Args::List::Head; ///////////////////TODO workaround for obsolete code, about to be removed
      using ArrayO = typename _ProcFun<FUN>::SigO;


      FUN process;

      ArrayI inParam;
      ArrayO outParam;

      template<typename...INIT>
      SimpleFunctionInvocationAdapter (INIT&& ...funSetup)
        : process{forward<INIT> (funSetup)...}
        { }


      void
      connect (uint fanIn, uint fanOut)
        {
          REQUIRE (fanIn == FAN_I and fanOut == FAN_O); //////////////////////////OOO this distinction is a left-over from the idea of fixed block sizes
          for (uint i=0; i<FAN_I; ++i)
            inParam[i] = & MAN::inBuff[i].template accessAs<BuffI>();
          for (uint i=0; i<FAN_O; ++i)
            outParam[i] = & MAN::outBuff[i].template accessAs<BuffO>();
        }

      void
      invoke()
        {
          process (inParam, outParam);
        }
    };



}} // namespace steam::engine
#endif /*ENGINE_FEED_MANIFOLD_H*/
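The invocation convention handled by `SimpleFunctionInvocationAdapter` — a plain function receiving one array of input-buffer pointers and one array of output-buffer pointers, wired up by `connect()` and triggered by `invoke()` — can be demonstrated with a self-contained miniature. `MiniAdapter`, `mixDown` and `runMix` are illustrative inventions for this sketch, not part of the code above:

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t FAN_I = 2, FAN_O = 1;
using ArrayI = std::array<float const*, FAN_I>;   // input buffer pointers
using ArrayO = std::array<float*, FAN_O>;         // output buffer pointers

// Example processing function following the "two arrays" convention:
// average the two input buffers (here reduced to single samples).
void mixDown (ArrayI in, ArrayO out)
{
    *out[0] = (*in[0] + *in[1]) / 2;
}

// Drastically simplified adapter: store the functor, fill the pointer
// arrays in connect(), pass both arrays to the functor in invoke().
template<class FUN>
struct MiniAdapter
{
    FUN process;
    ArrayI inParam;
    ArrayO outParam;

    void connect (float const* i1, float const* i2, float* o)
    {
        inParam  = {i1, i2};
        outParam = {o};
    }
    void invoke() { process (inParam, outParam); }
};

float runMix()
{
    float a = 1.0f, b = 3.0f, res = 0.0f;
    MiniAdapter<void(*)(ArrayI,ArrayO)> adapter{&mixDown};
    adapter.connect (&a, &b, &res);
    adapter.invoke();
    return res;     // average of 1 and 3
}
```

In the real adapter, the pointers are not passed in directly but retrieved from the mixed-in manifold's `BuffHandle` entries via `accessAs<BuffI>()`; the control flow, however, is the same connect-then-invoke sequence.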
@@ -14,11 +14,11 @@

/** @file node-builder.hpp
 ** Specialised shorthand notation for building the Render Node network.
 ** During the Builder run, the render nodes network will be constructed by gradually
 ** refining the connectivity structure derived from interpreting the »high-level model«
 ** During the Builder run, the Render Node network will be constructed by gradually
 ** refining the connectivity structure derived from interpreting the »high-level Model«
 ** from the current Session. At some point, it is essentially clear what data streams
 ** must be produced and what media processing functionality from external libraries
 ** will be utilised to achieve this goal. This is when the fluent builder notation
 ** will be utilised to achieve the goal. This is when the fluent builder notation
 ** defined in this header comes into play, allowing to package the fine grained and
 ** in part quite confusing details of parameter wiring and invocation preparation into
 ** some goal oriented building blocks, that can be combined and directed with greater
@@ -84,7 +84,6 @@


#include "lib/error.hpp"
//#include "lib/symbol.hpp"//////////////////////////////TODO RLY?
#include "lib/nocopy.hpp"
#include "steam/engine/weaving-pattern-builder.hpp"
#include "steam/engine/proc-node.hpp"
@@ -92,7 +91,6 @@
#include "lib/several-builder.hpp"
#include "lib/format-string.hpp"
#include "lib/index-iter.hpp"
#include "lib/test/test-helper.hpp"/////////////////////TODO TOD-oh

#include <utility>
#include <vector>
@@ -102,7 +100,6 @@ namespace steam {
namespace engine {
  namespace err = lumiera::error;

//  using lib::Literal;
  using util::_Fmt;
  using std::forward;
  using std::move;
@@ -361,7 +358,6 @@ namespace engine {
      completePort()
        {
          weavingBuilder_.connectRemainingInputs (_Par::leads_, this->defaultPort_);
          weavingBuilder_.fillRemainingBufferTypes(); ////////////////////////////////////////////////////////////////////OOO No! this should happen right at the start (automatically)
          return NodeBuilder{static_cast<NodeBuilder<POL,DAT>&&> (*this) // slice away PortBuilder subclass data
                            ,weavingBuilder_.sizMark
                            ,weavingBuilder_.build()};
@@ -12,22 +12,85 @@
 */

/** @file proc-node.hpp
 ** Interface to the processing nodes and the render nodes network.
 **
 ** Actually, there are three different interfaces to consider
 ** Interface to the processing nodes and the Render Nodes network.
 ** The Lumiera Render Engine is based on a graph of interconnected Render Nodes.
 ** This »Low-level-Model« is pre-arranged _by a Builder_ as result of compiling
 ** and interpreting the arrangement created by the user in the Session, known as
 ** »High-level-Model«. All ways to possibly _perform_ (play, render) the current
 ** arrangement are thus encoded into the configuration and connectivity of
 ** ProcNode elements.
 **
 ** # Usage
 **
 ** Regarding access, there are three different interfaces to consider
 ** - the ProcNode#pull is the invocation interface. It is function-call style
 ** - the builder interface, comprised by the NodeFactory and the WiringFactory.
 ** - the actual processing function is supposed to be a C function and will be
 **   hooked up within a thin wrapper.
 ** - the builder interface, comprising the NodeBuilder (TODO 2024 what else actually?).
 ** - the control of playback and rendering processes is accomplished by the Player.
 **   For actual processing, Lumiera relies on functionality provided by dedicated
 **   domain libraries (e.g. FFmpeg for video processing). A binding implemented
 **   as a Lumiera Plug-in will expose such a Library's resources as _Assets_
 **   and will set up _function bindings_ to be embedded into Render Nodes.
 **
 ** By using the builder interface, concrete node and wiring descriptor classes are created,
 ** based on some templates. These concrete classes form the "glue" to tie the node network
 ** together and contain much of the operation behaviour in a hard wired fashion.
 ** By using the NodeBuilder interface, concrete \ref ProcNode and \ref Port instances
 ** are created, interconnected and attached below the Fixture, which is the »backbone«
 ** of the low-level-Model. The coordination and the act of invoking this NodeBuilder
 ** is conducted by structures in the **Builder** subsystem of Lumiera, which works
 ** similar to a compiler for programming languages — with the difference that within
 ** this application an edit and media arrangement is compiled into „executable“ form,
 ** ready for rendering and performance. In this context _performance_ implies to »play«
 ** (render) part of a »Timeline«, which is accomplished through a PlayProcess, which
 ** in turn breaks down the work into individual »Render Jobs« organised through the
 ** [Scheduler](\ref scheduler.hpp). Through such a sequence of translations, the
 ** processing of a frame ends up as a [job to invoke](\ref render-invocation.hpp)
 ** some entrance point into the Render Node network.
 **
 ** @todo WIP-WIP-WIP 2024 Node-Invocation is reworked from ground up for the »Playback Vertical Slice«
 ** # Arrangement of Render Nodes
 **
 ** @see nodefactory.hpp
 ** @see operationpoint.hpp
 ** The arrangement of ProcNode elements in the render graph exhibits a [DAG] topology.
 ** Each node _knows only its direct predecessors,_ designated as »Lead Nodes« or »Leads«.
 ** Conceptually, each Node represents a specific processing capability, delegating internally
 ** to some actual Library implementation of the desired processing algorithm. Yet in reality,
 ** several _flavours_ of this processing capability are typically required. For example, maybe
 ** sound processing will be expected in stereo format (channel interleaved blocks of audio samples),
 ** but in addition also the two individual mono channels will be required independently. Or a video
 ** processing pipeline might be required in full resolution, but also sampled down for display in
 ** a GUI viewer window or for thumbnail images. The existence of several _flavours of computation_
 ** might seem obvious or irrelevant — yet touches on a fundamental decision: in Lumiera,
 ** **no media processing happens beyond the Render Nodes.** Even for the down-sampled
 ** tiny preview images, a render pipeline is specifically preconfigured, and then
 ** exposed through a \ref Port on the Render Node.
 **
 ** The actual rendering thus proceeds through the successive activation of Ports. Internally, each
 ** Port is connected to _predecessor ports,_ which can be _»pulled«_ to generate the _input data_ for
 ** the current processing step. Conducting the invocation of a single processing step in a Port thus
 ** requires the interplay of several, intricately interwoven activities — forming a »Weaving Pattern«:
 ** Establishing a frame, pulling from predecessors, spawning out further memory buffers to hold computed
 ** result data and finally triggering the actual »weft«. The _Port on a Node_ is thus _an interface_,
 ** actually implemented by a \ref Turnout, which comprises a _Weaving Pattern Template._ The most
 ** common scheme for media processing is embodied by the \ref MediaWeavingPattern template, yet
 ** other Weaving Patterns may be configured to adapt to different processing needs (e.g. hardware
 ** accelerated computation).
 **
 ** Templates in C++ _must be instantiated,_ with arguments specific to the usage: here, these are
 ** the signature of the processing function, the types and number of input- and output buffers,
 ** and an additional tuple of specific parameters to pass, like e.g. the frame number or the
 ** (possibly automated) parameter settings for an effect. The invocation of a Port thus calls
 ** through a (classical, function-virtual) interface into specific code instantiated within the
 ** Library-adapter Plug-in. At compile time, consistency of the involved buffer types, function
 ** signatures and memory allocation schemes can be ensured, so that no further checks, dynamic
 ** adaptation or transformation are necessary at runtime. The engine implementation works on
 ** data tuples, typed buffer pointers and helpers checked for memory safety — and not on
 ** plain arrays and void pointers.
 ** @remark A future extension to this scheme is conceivable, where common processing pipelines
 **         are pre-compiled in entirety, possibly combined with hardware acceleration.
 **
 ** @todo WIP-WIP 12/2024 Node-Invocation is reworked from ground up for the »Playback Vertical Slice«
 **
 ** @see turnout.hpp
 ** @see turnout-system.hpp
 ** @see node-builder.hpp
 ** @see node-link.test.cpp
 ** [DAG]: https://en.wikipedia.org/wiki/Directed_acyclic_graph
 */

#ifndef STEAM_ENGINE_PROC_NODE_H
@@ -68,8 +131,7 @@ namespace engine {


  class Port
//  : util::MoveOnly //////////////////////////////////////////////////OOO not clear if necessary here, and requires us to declare the ctors!!! See Turnout
    : util::NonCopyable //////////////////////////////////////////////////OOO this would be the perfect solution, if we manage to handle this within the builder
    : util::NonCopyable
    {
    public:
      virtual ~Port(); ///< this is an interface
@@ -77,9 +139,6 @@ namespace engine {

      virtual BuffHandle weave (TurnoutSystem&, OptionalBuff =std::nullopt)  =0;

//    // compiler does not generate a move-ctor automatically due to explicit dtor
//    Port() = default;
//    Port(Port&&) = default;
      ProcID& procID;
    };

@@ -14,7 +14,6 @@

/** @file render-invocation.cpp
 ** Implementation details regarding the invocation of a single render node
 ** @deprecated very likely to happen in a different way, while the concept remains valid
 ** @todo WIP-WIP-WIP 12/2024 about to build a Render Node invocation, combining the old
 **       unfinished draft from 2009 with the new Render Engine code
 */
@@ -54,8 +53,8 @@ namespace engine {
  size_t
  RenderInvocation::hashOfInstance (InvocationInstanceID invoKey)  const
    { ////////////////////////////////////////////////TICKET #1295 : settle upon the parameters actually needed and decide what goes into this hash
      std::hash<size_t> hashr;
      HashVal res = hashr (invoKey.frameNumber);
      std::hash<size_t> hashCalc;
      HashVal res = hashCalc (invoKey.frameNumber);
      return res;
    }

@@ -13,18 +13,18 @@


/** @file turnout-system.hpp
 ** THe actual state of a frame rendering evaluation parametrised for a single job.
 ** The rendering of frames is triggered from a render job, and recursively retrieves the data
 ** from predecessor render nodes, prepared, configured and interconnected by the Builder.
 ** The actual state of a rendering evaluation parametrised for a single job.
 ** The rendering of frames is triggered from a render job, and recursively retrieves data
 ** from predecessor Render Nodes, prepared, configured and interconnected by the Builder.
 ** Some stateful aspects can be involved in this recursive evaluation, beyond the data
 ** passed directly through the recursive calls and interconnected data buffers. Notably,
 ** some operations need direct call parameters, e.g. the frame number to retrieve or
 ** the actual parametrisation of an effect, which draws from _parameter automation._
 ** Moreover, when rendering interactively, parts of the render pipeline may be
 ** changed dynamically by mute toggles or selecting an output in the viewer's
 ** _Switch Board.
 ** _Switch Board._
 **
 ** The TurnoutSystem is related to the actual incidence and is created dynamically,
 ** The TurnoutSystem is related to the actual invocation and is created dynamically,
 ** while connecting to all the pre-existing \ref Turnout elements, sitting in the ports
 ** of those render nodes touched by the actual render invocation. It acts as mediator and
 ** data exchange hub, while gearing up the actual invocation to cause calculation of media data
@@ -15,36 +15,39 @@

/** @file turnout.hpp
 ** Fixed standard setup used in each Port of the Render Node to generate data.
 ** Organise the state related to the invocation of a single ProcNode::pull() call.
 ** This header defines part of the "glue" which holds together the render node network
 ** and enables to pull result frames from the nodes. Doing so requires some invocation
 ** local state to be maintained, especially a table of buffers used to carry out the
 ** calculations. Further, getting the input buffers filled requires to issue recursive
 ** \c pull() calls, which on the whole creates a stack-like assembly of local invocation
 ** This header defines part of the "glue" which holds together the Render Node network
 ** and enables to pull result frames from the nodes. Doing so requires some local state
 ** to be maintained, especially a collection of buffers used to hold data for computation.
 ** Furthermore, getting the input buffers filled with prerequisite data leads to the issuance
 ** of recursive `weave()` calls, together creating a stack-like assembly of local invocation
 ** state.
 ** The actual steps to be carried out for a \c pull() call are dependent on the configuration
 ** of the node to pull. Each node has been preconfigured by the builder with a Connectivity
 ** descriptor and a concrete type of a StateAdapter. The actual sequence of steps is defined
 ** in the header nodeoperation.hpp out of a set of basic operation steps. These steps all use
 ** the passed in Invocation object (a sub-interface of StateAdapter) to access the various
 ** aspects of the invocation state.
 **
 ** # composition of the Invocation State
 **
 ** For each individual ProcNode#pull() call, the WiringAdapter#callDown() builds an StateAdapter
 ** instance directly on the stack, managing the actual buffer pointers and state references. Using this
 ** StateAdapter, the predecessor nodes are pulled. The way these operations are carried out is encoded
 ** in the actual StateAdapter type known to the NodeWiring (WiringAdapter) instance. All of these actual
 ** StateAdapter types are built as implementing the engine::StateClosure interface.
 **
 ** @todo relies still on an [obsoleted implementation draft](\ref bufftable-obsolete.hpp)
 ** @see engine::ProcNode
 ** @see engine::StateProxy
 ** @see engine::FeedManifold
 ** @see nodewiring.hpp interface for building/wiring the nodes
 **
 ** @warning as of 4/2023 a complete rework of the Dispatcher is underway ///////////////////////////////////////////TICKET #1275
 **
 ** The actual steps to be carried out for a `weave()` call are broken down into
 ** a fixed arrangement of steps, in accordance with the _weaving metaphor:_
 ** - `mount()` establish the framework of operation
 ** - `pull()`  recurse into predecessors to retrieve input data
 ** - `shed()`  allocate output buffers and spread out all connections
 ** - `weft()`  pass the invocation on to the processing operation
 ** - `fix()`   detach from input, mark and commit results and pass output
 ** As arranged in the Turnout template, the necessary interconnections are prepared
 ** and this standard sequence of operations is issued, while delegating the actual
 ** implementation of these steps to a **Weaving Pattern**, integrated as mix-in
 ** base template. Notably an implementation data scheme is expected as a definition
 ** nested into the weaving pattern, designated as `PAT::Feed`, and created
 ** _on the stack for each invocation_ by the `mount()` call. »The Feed«
 ** is conceived both as an _Invocation Adapter_ and a _Pipe Manifold:_
 ** - embedding an adapted processing-functor and a parameter-functor
 ** - providing storage slots for \ref BuffHandle management entries
 ** @note typically, a \ref MediaWeavingPattern is used as default implementation.
 ** @remark The name »Turnout« plays upon the overlay of several metaphors, notably
 **         the [Railroad Turnout]. A »Turnout System« may thus imply either a system for
 **         generating and collecting turnout, or the complex interwoven system of tracks
 **         and switches found in large railway stations.
 ** @see \ref proc-node.hpp "Overview of Render Node structures"
 ** @see turnout-system.hpp
 ** @see weaving-pattern.hpp
 ** @see weaving-pattern-builder.hpp
 ** [Railroad Turnout]: https://en.wikipedia.org/wiki/Railroad_turnout
 */

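The fixed mount-pull-shed-weft-fix sequence described above, with the concrete steps supplied by a mixed-in Weaving Pattern exposing a nested `Feed` type, can be shown as a toy model. `ToyPattern` and `ToyTurnout` are deliberately trivial stand-ins (the Feed just records a trace), not Lumiera's actual Turnout or MediaWeavingPattern templates:

```cpp
#include <string>

// Toy Weaving Pattern: provides a nested Feed type plus the five step
// implementations; here each step merely records itself in a trace string.
struct ToyPattern
{
    struct Feed { std::string trace; };

    Feed mount()              { return Feed{"mount"}; }     // build Feed on the stack
    void pull (Feed& f)       { f.trace += "-pull"; }       // recurse to predecessors
    void shed (Feed& f)       { f.trace += "-shed"; }       // allocate output buffers
    void weft (Feed& f)       { f.trace += "-weft"; }       // invoke the processing
    std::string fix (Feed& f) { f.trace += "-fix"; return f.trace; }  // commit results
};

// Toy Turnout: fixes the standard sequence, while delegating every
// individual step to the mixed-in pattern — mirroring Turnout::weave().
template<class PAT>
struct ToyTurnout
    : PAT
{
    std::string
    weave()
    {
        typename PAT::Feed feed = PAT::mount();
        PAT::pull (feed);
        PAT::shed (feed);
        PAT::weft (feed);
        return PAT::fix (feed);
    }
};
```

The real `Turnout::weave()` shown further below follows exactly this shape, only with a `TurnoutSystem&` threaded through `mount()` and `pull()`, and a `BuffHandle` returned from `fix()`.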
@@ -65,7 +68,7 @@ namespace engine {


  /**
   * Definition to emulate a _Concept_ for the *Invocation Adapter*.
   * Definition to emulate a _Concept_ for the **Invocation Adapter**.
   * For each Proc-Asset, the corresponding Library Adapter must provide
   * such adapters to access the input and result buffers and finally to
   * invoke the processing functions from this library.
@@ -110,7 +113,6 @@ namespace engine {
  class Turnout
    : public Port
    , public PAT
//  , util::MoveOnly
    {
      static_assert (_verify_usable_as_WeavingPattern<PAT>());

@@ -131,10 +133,10 @@ namespace engine {
      BuffHandle
      weave (TurnoutSystem& turnoutSys, OptionalBuff outBuff =std::nullopt)  override
        {
          Feed feed = PAT::mount(turnoutSys);
          PAT::pull(feed, turnoutSys);
          PAT::shed(feed, outBuff);
          PAT::weft(feed);
          Feed feed = PAT::mount (turnoutSys);
          PAT::pull (feed, turnoutSys);
          PAT::shed (feed, outBuff);
          PAT::weft (feed);
          return PAT::fix (feed);
        }
    };

@ -15,20 +15,61 @@
|
|||
/** @file weaving-pattern-builder.hpp
|
||||
** Construction kit to establish an invocation scheme for media calculations.
|
||||
** Adapters and configuration are provided to invoke the actual _media processing function_
|
||||
** in accordance to a fixed _wiring scheme:_
|
||||
** - the function takes two arguments
|
||||
** - these are an array of input and output buffer pointers
|
||||
** - buffer sizes or types are assumed to be uniform over all »slots«
|
||||
** - yet the input side may use another type than the output side
|
||||
** @todo as of 10/2024, this scheme is established as prototype to explore how processing nodes
|
||||
** can be built, connected and invoked; the expectation is however that this simple scheme
|
||||
** is suitable to adapt and handle many common cases of invoking media processing functions,
|
||||
** because the given _functor_ is constructed within a plug-in tailored to a specific
|
||||
** media processing library (e.g. FFmpeg) and thus can be a lambda to forward to the
|
||||
** actual function.
|
||||
** in accordance with a wiring scheme as implied by the _signature_ of the bound function.
|
||||
** - the function takes one to three arguments
|
||||
** - these are related to the parameters, the input and the output (always in that order)
|
||||
** - the specification of at least one output buffer is mandatory
|
||||
** - a function may omit input and / or the parameter «slot»
|
||||
** - multiple items of the same kind (output, input, parameter) can be packaged
|
||||
** into a heterogeneous tuple, or given as an array of identically typed elements;
|
||||
** yet a single value can be accepted directly as function argument.
|
||||
** - input/output buffers are recognisable as pointers, while parameters are value data.
|
||||
** - pointers and parameter values are typed, which is used internally to ensure passing
|
||||
** the right value to the corresponding item and to ensure suitable memory allocations.
|
||||
** @note steam::engine::Turnout mixes-in the steam::engine::MediaWeavingPattern, which in turn
|
||||
** inherits from an *Invocation Adapter* given as template parameter. So this constitutes
|
||||
** an *extension point* where other, more elaborate invocation schemes could be integrated.
|
||||
** inherits from a FeedManifold given as template parameter. So this constitutes an
|
||||
** **extension point** where other, more elaborate invocation schemes could be integrated.
|
||||
**
|
||||
** # Preparing a FeedManifold and handling invocation parameters
|
||||
**
|
||||
** Detection of the processing function signature with all possible variations as detailed above
|
||||
** is the responsibility of the [FeedManifold template](\ref feed-manifold.hpp). For each distinct
|
||||
** signature, a suitable data layout is generated, including storage to hold the processing-functor
|
||||
** itself (which is embedded as a clone-copy to expose the actual invocation to the optimiser in
|
||||
** the C++ compiler). The WeavingBuilder defined here is used to build a Port implementation and
|
||||
** thus a specific »Weaving Pattern«, which — at the actual Node invocation — will in turn build
|
||||
** the concrete FeedManifold instance into local stack memory. For this reason, the Port can be
|
||||
** understood as the Level-1 builder, whereas the Port / Weaving Builder is classified as Level-2
|
||||
** and a processing and link-builder operating on top of the former is designated as Level-3.
|
||||
**
|
||||
** The actual type of the FeedManifold, including all the specifics of the data layout, becomes
|
||||
** embedded into the Port implementation (≙Weaving Pattern) by means of a FeedPrototype instance.
|
||||
** Furthermore, a parameter-functor can be installed there, to generate actual parameter data
|
||||
** whenever the FeedPrototype generates a new FeedManifold instance for the next render invocation.
|
||||
** The parameter data (and a copy of the processing-functor) is stored alongside in this generation
|
||||
** step, and thus available in local stack memory during an extended (possibly recursive) render
|
||||
** invocation sequence.
|
||||
**
|
||||
** Invocation parameters are a crucial ingredient for each invocation, yet the responsibility for
|
||||
** the parameter-functor to produce these parameters lies in a different part of the system than
|
||||
** the responsibility for configuring the processing functor. The reason is simply that the
|
||||
** setup of actual parameters is an essential part of the user's work on the edit in the Session.
|
||||
** The control flow for parameters thus traces back into the Session, while on the other hand the
|
||||
** processing-functor must be configured by an external media-library adapter Plug-in. So this
|
||||
** creates the challenge that in actual use the PortBuilder will be passed through several realms.
|
||||
** Firstly, the external library binding will be invoked to set up a processing-functor, and then,
|
||||
** in a separate step, the same PortBuilder instance, unfinished at that point, will be passed to
|
||||
** the code responsible for configuring parameters and _Parameter Automation._ Only after that,
|
||||
** the _terminal builder operation_ WeavingBuilder::build() will be invoked, and the control
|
||||
** flow in the Lumiera Builder subsystem proceeds to outfitting the next Render Node.
|
||||
** This intricate sequence of configuration steps translates into the necessity to build the
|
||||
** FeedPrototype first in its basic form, without a parameter-functor. The second configuration
|
||||
** step performed later will then have to re-shape the FeedPrototype to add a parameter-functor.
|
||||
** This amounts to a move-copy, thereby changing the FeedPrototype's template arguments to
|
||||
** the full signature, including the type of the parameter functor. In this final shape,
|
||||
** it can be integrated into a Turnout instance and dropped off into the PatternData, which
|
||||
** is used to record configuration for the actual storage allocation and node generation step
|
||||
** performed later.
|
||||
**
|
||||
**
|
||||
** # Interplay of NodeBuilder, PortBuilder and WeavingBuilder
|
||||
|
|
@ -50,9 +91,9 @@
|
|||
** setup of a function invocation, with appropriate _wiring_ of input and output connections.
|
||||
** For example, an audio filtering function may be exposed on port-#1 for stereo sound, while
|
||||
** port-#2 may process the left, and port-#3 the right channel in isolation. It is entirely
|
||||
** up to the library-adapter-plug-in what processing functions to expose, and in which flavours.
|
||||
** The WeavingBuilder is used to generate a single \ref Turnout object, which corresponds to
|
||||
** the invocation of a single port and thus one flavour of processing.
|
||||
** up to the library-adapter-plug-in to decide what processing functions to expose, and in
|
||||
** which flavours. The WeavingBuilder is used to generate a single \ref Turnout object,
|
||||
** which corresponds to the invocation of a single port and thus one flavour of processing.
|
||||
**
|
||||
** At one architectural level above, the \ref NodeBuilder exposes the ability to set up a
|
||||
** ProcNode, complete with several ports and connected to possibly several predecessor nodes.
|
||||
|
|
@ -90,11 +131,7 @@
|
|||
#ifndef STEAM_ENGINE_WEAVING_PATTERN_BUILDER_H
|
||||
#define STEAM_ENGINE_WEAVING_PATTERN_BUILDER_H
|
||||
|
||||
//#include "steam/common.hpp"
|
||||
#include "lib/error.hpp"
|
||||
#include "lib/symbol.hpp"
|
||||
//#include "steam/engine/channel-descriptor.hpp"
|
||||
//#include "vault/gear/job.h"
|
||||
#include "lib/several-builder.hpp"
|
||||
#include "steam/engine/proc-id.hpp"
|
||||
#include "steam/engine/engine-ctx.hpp"
|
||||
|
|
@ -102,18 +139,13 @@
|
|||
#include "steam/engine/buffer-provider.hpp"
|
||||
#include "steam/engine/buffhandle-attach.hpp" /////////////////OOO why do we need to include this? we need the accessAs<TY>() template function
|
||||
#include "lib/meta/tuple-helper.hpp"
|
||||
#include "lib/test/test-helper.hpp" ////////////////////////////OOO TODO added for test
|
||||
//#include "lib/test/test-helper.hpp" ////////////////////////////OOO TODO added for test
|
||||
#include "lib/format-string.hpp"
|
||||
#include "lib/iter-zip.hpp"
|
||||
//#include "lib/util-foreach.hpp"
|
||||
//#include "lib/iter-adapter.hpp"
|
||||
//#include "lib/meta/function.hpp"
|
||||
//#include "lib/itertools.hpp"
|
||||
#include "lib/util.hpp"
|
||||
|
||||
//#include <utility>
|
||||
#include <functional>
|
||||
//#include <array>
|
||||
#include <utility>
|
||||
#include <vector>
|
||||
#include <string>
|
||||
|
||||
|
|
@ -124,8 +156,6 @@ namespace engine {
|
|||
|
||||
using StrView = std::string_view;
|
||||
using std::forward;
|
||||
// using lib::Literal;
|
||||
using lib::Several;
|
||||
using lib::Depend;
|
||||
using lib::izip;
|
||||
using util::_Fmt;
|
||||
|
|
@ -133,47 +163,13 @@ namespace engine {
|
|||
|
||||
|
||||
|
||||
/**
|
||||
* Typical base configuration for a Weaving-Pattern chain:
|
||||
* - use a simple processing function
|
||||
* - pass an input/output buffer array to this function
|
||||
* - map all »slots« directly without any re-ordering
|
||||
* - use a sufficiently sized FeedManifold as storage scheme
|
||||
* @remark actual media handling plug-ins may choose to
|
||||
* employ more elaborate _invocation adapters_
|
||||
* specifically tailored to the library's needs.
|
||||
*/
|
||||
template<class FUN>
|
||||
struct DirectFunctionInvocation
|
||||
: util::MoveOnly
|
||||
{
|
||||
enum{ MAX_SIZ = _ProcFun<FUN>::MAXSZ };
|
||||
using Manifold = FoldManifeed<FUN>; ////////////////////////////////////////////////////OOO temporary fork between old-style and new-style implementation
|
||||
using Feed = SimpleFunctionInvocationAdapter<Manifold, FUN>;
|
||||
|
||||
std::function<Feed()> buildFeed;
|
||||
|
||||
/** when building the Turnout, prepare the _invocation adapter_
|
||||
* @note processing function \a fun is bound by value into the closure,
|
||||
* so that each invocation will create a copy of that function,
|
||||
* embedded (and typically inlined) into the invocation adapter.
|
||||
*/
|
||||
DirectFunctionInvocation(FUN fun)
|
||||
: buildFeed{[=]{ return Feed{fun}; }}
|
||||
{ }
|
||||
};
|
||||
|
||||
|
||||
|
||||
|
||||
template<class POL, class I, class E=I>
|
||||
using DataBuilder = lib::SeveralBuilder<I,E, POL::template Policy>;
|
||||
|
||||
template<uint siz>
|
||||
using SizMark = std::integral_constant<uint,siz>;
|
||||
|
||||
|
||||
/////////////////////////////////////////////////////////////////////////////////////////////////////////////TICKET #1371 : Prototyping: how to assemble a Turnout
|
||||
|
||||
|
||||
/**
|
||||
* Recursive functional data structure to collect weaving pattern data
|
||||
|
|
@ -226,8 +222,6 @@ namespace engine {
|
|||
|
||||
|
||||
|
||||
// template<class FUN>
|
||||
// using SimpleDirectInvoke = MediaWeavingPattern<DirectFunctionInvocation<FUN>>;
|
||||
|
||||
|
||||
/**
|
||||
|
|
@ -247,7 +241,6 @@ namespace engine {
|
|||
struct WeavingBuilder
|
||||
: util::MoveOnly
|
||||
{
|
||||
using FunSpec = _ProcFun<FUN>; ///////////////////////////////////TODO remove this!!!
|
||||
using Prototype = typename FeedManifold<FUN>::Prototype;
|
||||
using WeavingPattern = MediaWeavingPattern<Prototype>;
|
||||
using TurnoutWeaving = Turnout<WeavingPattern>;
|
||||
|
|
@ -293,34 +286,6 @@ namespace engine {
|
|||
return move(*this);
|
||||
}
|
||||
|
||||
template<class BU>
|
||||
WeavingBuilder&&
|
||||
appendBufferTypes (uint cnt)
|
||||
{
|
||||
if (buffTypes.size()+cnt > FAN_O)
|
||||
throw err::Logic{_Fmt{"Builder: attempt to add %d further output buffers, "
|
||||
"while %d of %d possible outputs are already connected."}
|
||||
% cnt % buffTypes.size() % FAN_O
|
||||
};
|
||||
while (cnt--)
|
||||
buffTypes.emplace_back([](BufferProvider& provider)
|
||||
{ return provider.getDescriptor<BU>(); });
|
||||
ENSURE (buffTypes.size() <= FAN_O);
|
||||
return move(*this);
|
||||
}
|
||||
|
||||
/** @deprecated handling of output buffer configuration should be "the other way round":
|
||||
* Instead of filling-in, a default should be established at start,
|
||||
* which can then arbitrarily be refined
|
||||
*/
|
||||
WeavingBuilder&&
|
||||
fillRemainingBufferTypes() ///////////////////////////////////////////////////OOO Buffer-Typen gleich zu Beginn default-belegen
|
||||
{
|
||||
using BuffO = typename FunSpec::BuffO;
|
||||
uint cnt = FAN_O - buffTypes.size();
|
||||
ENSURE (cnt == 0); ///////////////////////////////////////////////////////////////////OOO already filled in constructor now -- remove this code
|
||||
return appendBufferTypes<BuffO>(cnt);
|
||||
}
|
||||
|
||||
WeavingBuilder&&
|
||||
connectRemainingInputs (DataBuilder<POL, ProcNodeRef>& knownLeads, uint defaultPort)
|
||||
|
|
@ -383,69 +348,69 @@ ENSURE (cnt == 0); /////////////////////////////////////////////////////////////
|
|||
};
|
||||
}
|
||||
|
||||
private:
|
||||
void
|
||||
maybeFillDefaultProviders (size_t maxSlots)
|
||||
{
|
||||
for (uint i=providers.size(); i < maxSlots; ++i)
|
||||
providers.emplace_back (ctx().mem);
|
||||
}
|
||||
|
||||
/**
|
||||
* @internal configuration builder for buffer descriptors
|
||||
* @tparam BU target type of the buffer (without pointer)
|
||||
* The FeedPrototype can generate for the given \a FUN a
|
||||
* type sequence of output buffer types, which are used
|
||||
* to instantiate this template and then later to work
|
||||
* on specific output buffer slots.
|
||||
*/
|
||||
template<typename BU>
|
||||
struct BufferDescriptor
|
||||
{
|
||||
/**
|
||||
* Setup the constructor function for the default BufferDescriptors.
|
||||
* @return a functor that can be applied to the actual BufferProviders
|
||||
* at the point when everything for this port is configured.
|
||||
*/
|
||||
TypeMarker
|
||||
makeBufferDescriptor() const
|
||||
{
|
||||
return [](BufferProvider& provider)
|
||||
{ return provider.getDescriptor<BU>(); };
|
||||
}
|
||||
};
|
||||
|
||||
using OutTypesDescriptors = typename Prototype::template OutTypesApply<BufferDescriptor>;
|
||||
using OutDescriptorTup = lib::meta::Tuple<OutTypesDescriptors>;
|
||||
|
||||
/** A tuple of BufferDescriptor instances for all output buffer types */
|
||||
static constexpr OutDescriptorTup outDescriptors{};
|
||||
|
||||
/** @internal pre-initialise the buffTypes vector with a default configuration.
|
||||
* @remarks In the _terminal step,_ the buffTypes will be transformed into a
|
||||
* sequence of BufferDescriptor entries, which can later be used
|
||||
* by the node invocation to prepare a set of output buffers.
|
||||
* - each slot holds a function<BufferDescriptor(BufferProvider&)>
|
||||
* - these can be used to configure specific setup for some buffers
|
||||
* - the default BufferDescriptor will just default-construct the
|
||||
* designated «output slot» of the media processing-function.
|
||||
*/
|
||||
static auto
|
||||
fillDefaultBufferTypes()
|
||||
{
|
||||
std::vector<TypeMarker> defaultBufferTypes;
|
||||
defaultBufferTypes.reserve (std::tuple_size_v<OutDescriptorTup>);
|
||||
lib::meta::forEach(outDescriptors
|
||||
,[&](auto& desc)
|
||||
{
|
||||
defaultBufferTypes.emplace_back(
|
||||
desc.makeBufferDescriptor());
|
||||
});
|
||||
return defaultBufferTypes;
|
||||
}
|
||||
|
||||
|
||||
private: /* ====== WeavingBuilder implementation details ====== */
|
||||
void
|
||||
maybeFillDefaultProviders (size_t maxSlots)
|
||||
{
|
||||
for (uint i=providers.size(); i < maxSlots; ++i)
|
||||
providers.emplace_back (ctx().mem);
|
||||
}
|
||||
|
||||
/**
|
||||
* @internal configuration builder for buffer descriptors
|
||||
* @tparam BU target type of the buffer (without pointer)
|
||||
* The FeedPrototype can generate for the given \a FUN a
|
||||
* type sequence of output buffer types, which are used
|
||||
* to instantiate this template and then later to work
|
||||
* on specific output buffer slots.
|
||||
*/
|
||||
template<typename BU>
|
||||
struct BufferDescriptor
|
||||
{
|
||||
/**
|
||||
* Setup the constructor function for the default BufferDescriptors.
|
||||
* @return a functor that can be applied to the actual BufferProviders
|
||||
* at the point when everything for this port is configured.
|
||||
*/
|
||||
TypeMarker
|
||||
makeBufferDescriptor() const
|
||||
{
|
||||
return [](BufferProvider& provider)
|
||||
{ return provider.getDescriptor<BU>(); };
|
||||
}
|
||||
};
|
||||
|
||||
using OutTypesDescriptors = typename Prototype::template OutTypesApply<BufferDescriptor>;
|
||||
using OutDescriptorTup = lib::meta::Tuple<OutTypesDescriptors>;
|
||||
|
||||
/** A tuple of BufferDescriptor instances for all output buffer types */
|
||||
static constexpr OutDescriptorTup outDescriptors{};
|
||||
|
||||
/** @internal pre-initialise the buffTypes vector with a default configuration.
|
||||
* @remarks In the _terminal step,_ the buffTypes will be transformed into a
|
||||
* sequence of BufferDescriptor entries, which can later be used
|
||||
* by the node invocation to prepare a set of output buffers.
|
||||
* - each slot holds a function<BufferDescriptor(BufferProvider&)>
|
||||
* - these can be used to configure specific setup for some buffers
|
||||
* - the default BufferDescriptor will just default-construct the
|
||||
* designated «output slot» of the media processing-function.
|
||||
*/
|
||||
static auto
|
||||
fillDefaultBufferTypes()
|
||||
{
|
||||
std::vector<TypeMarker> defaultBufferTypes;
|
||||
defaultBufferTypes.reserve (std::tuple_size_v<OutDescriptorTup>);
|
||||
lib::meta::forEach(outDescriptors
|
||||
,[&](auto& desc)
|
||||
{
|
||||
defaultBufferTypes.emplace_back(
|
||||
desc.makeBufferDescriptor());
|
||||
});
|
||||
return defaultBufferTypes;
|
||||
}
|
||||
};
|
||||
/////////////////////////////////////////////////////////////////////////////////////////////////////////////TICKET #1367 : (End)Prototyping: how to assemble a Turnout
|
||||
|
||||
|
||||
|
||||
}}// namespace steam::engine
|
||||
|
|
|
|||
|
|
@ -15,34 +15,118 @@
|
|||
|
||||
/** @file weaving-pattern.hpp
|
||||
** Construction set to assemble and operate a data processing scheme within a Render Node.
|
||||
** This header defines part of the "glue" which holds together the render node network
|
||||
** and enables to pull result frames from the nodes. Doing so requires some invocation
|
||||
** local state to be maintained, especially a table of buffers used to carry out the
|
||||
** calculations. Further, getting the input buffers filled requires to issue recursive
|
||||
** \c pull() calls, which on the whole creates a stack-like assembly of local invocation
|
||||
** state.
|
||||
** The actual steps to be carried out for a \c pull() call are dependent on the configuration
|
||||
** of the node to pull. Each node has been preconfigured by the builder with a Connectivity
|
||||
** descriptor and a concrete type of a StateAdapter. The actual sequence of steps is defined
|
||||
** in the header nodeoperation.hpp out of a set of basic operation steps. These steps all use
|
||||
** the passed in Invocation object (a sub-interface of StateAdapter) to access the various
|
||||
** aspects of the invocation state.
|
||||
** Together with turnout.hpp, this header provides the "glue" which holds together the
|
||||
** typical setup of a Render Node network for processing media data. A MediaWeavingPattern
|
||||
** implements the sequence of steps — as driven by the Turnout — to combine the invocation
|
||||
** of media processing operations from external Libraries with the buffer- and parameter
|
||||
** management provided by the Lumiera Render Engine. Since these operations are conducted
|
||||
** concurrently, all invocation state has to be maintained in local storage on the stack.
|
||||
**
|
||||
** # Integration with media handling Libraries
|
||||
**
|
||||
** A Render invocation originates from a [Render Job](\ref render-invocation.hpp), which first
|
||||
** establishes a TurnoutSystem and then enters into the recursive Render Node activation by
|
||||
** invoking Port::weave() for the »Exit Node«, as defined by the job's invocation parameters.
|
||||
** The first step in the processing cycle, as established by the Port implementation (\ref Turnout),
|
||||
** is to build a »Feed instance«, from the invocation of `mount(TurnoutSystem&)`.
|
||||
**
|
||||
** Generally speaking, a `Feed` fulfils the role of an _Invocation Adapter_ and a _Manifold_ of
|
||||
** data connections. The standard implementation, as given by MediaWeavingPattern, relies on a
|
||||
** combination of both into a \ref FeedManifold. This is a flexibly configured data adapter,
|
||||
** directly combined with an embedded _adapter functor_ to wrap the invocation of processing
|
||||
** operations provided by an external library.
|
||||
**
|
||||
** Usually some kind of internal systematics is assumed and applied within such a library.
|
||||
** Operations can be exposed as plain function to invoke, or through some configuration and
|
||||
** builder notion. Function arguments tend to follow a common arrangement and naming scheme,
|
||||
** also assuming a specific organisation and data layout for input and output data. This kind of
|
||||
** schematism is rooted in something deeper: exposing useful operations as a library collection
|
||||
** requires a common ground, an understanding about the _order of things_ to be treated — at least
|
||||
** for those kind of things, which fall into a specific _domain,_ when tasks related to such a
|
||||
** domain shall be supported by the Library. Such an (implicit or explicit) framework of structuring
|
||||
** is usually designated as a **Domain Ontology** (in contrast to the questions pertaining Ontology
|
||||
** in general, which are the subject of philosophy proper). Even seemingly practical matters like
|
||||
** processing media data do rely on fundamental assumptions and basic premises regarding what is
|
||||
** at stake and what shall be subject to treatment, what fundamental entities and relationships
|
||||
** to consider within the domain. Incidentally, many of these assumptions are positive in nature
|
||||
** and not necessarily a given — which is the root of essential incompatibilities between Libraries
|
||||
** targeting a similar domain: due to such fundamental differences, they simply cannot fully agree
|
||||
** upon what kinds of things to expect and where to draw the line of distinction.
|
||||
**
|
||||
** The Lumiera Render Engine and media handling framework is built in a way _fundamentally agnostic_
|
||||
** to the specific presuppositions of this or that media handling library. By and large, decisions,
|
||||
** distinctions and qualifications are redirected back into the scope of the respective library, by
|
||||
** means of a media-library adapter plug-in. Assuming that the user _in fact understands_ the meaning
|
||||
** of and reasoning behind employing a given library, the _mere handling_ of the related processing
|
||||
** can be reduced to a small set of organisational traits. For the sake of consistency, you may label
|
||||
** these as a »Render Engine Ontology«. In all brevity,
|
||||
** - We assume that the library provides distinguishable processing operations
|
||||
** that can be structured and classified and managed as _processing assets,_
|
||||
** - we assume that processing is applied to sources or »media« and that
|
||||
** the result of processing is again a source that can be processed further;
|
||||
** - specific operations can thus be conceptualised as processing-stages or _Nodes,_
|
||||
** interconnected by _media streams,_ which can be tagged with a _stream type._
|
||||
** - At implementation level, such streams can be represented in entirety as data buffers
|
||||
** of a specific layout, filled with some »frame« or chunk of data
|
||||
** - and the single processing step or operation can be completely encapsulated
|
||||
** as a pure function (referentially transparent, without side effects);
|
||||
** - all state and parametrisation can be represented either as some further data stream in/out,
|
||||
** or as parameters-of-processing, which can be passed as a set of values to the function
|
||||
** prior to invocation, thereby completely determining the observable behaviour.
|
||||
**
|
||||
** # composition of the Invocation State
|
||||
**
|
||||
** For each individual ProcNode#pull() call, the WiringAdapter#callDown() builds an StateAdapter
|
||||
** instance directly on the stack, managing the actual buffer pointers and state references. Using this
|
||||
** StateAdapter, the predecessor nodes are pulled. The way these operations are carried out is encoded
|
||||
** in the actual StateAdapter type known to the NodeWiring (WiringAdapter) instance. All of these actual
|
||||
** StateAdapter types are built as implementing the engine::StateClosure interface.
|
||||
** By means of these posits, the attachment point to an external library can be reduced to a small
|
||||
** number of connector links. Handling the capabilities of a library within the Session and high-level Model
|
||||
** will require some kind of _registration,_ which is beyond the scope of this discussion here. As far as
|
||||
** the Render Engine and the low-level-Model are concerned, any usage of the external library's capabilities
|
||||
** can be reduced to...
|
||||
** - Preparing an adapter functor, designated as **processing-functor**. This functor takes three kinds of
|
||||
** arguments, each packaged as a single function call argument, which may either be a single item, or
|
||||
** possibly be structured as a tuple of heterogeneous elements, or an array of homogeneous items.
|
||||
** + an output data buffer or several such buffers are always required
|
||||
** + (optionally) an input buffer or several such buffers need to be supplied
|
||||
** + (optionally) also a parameter value, tuple or array can be specified
|
||||
** - Supplying actual parameter values (if necessary); these are drawn from the invocation
|
||||
** of a further functor, designated as **parameter-functor**, and provided from within
|
||||
** the internal framework of the Lumiera application, either to deliver fixed parameter
|
||||
** settings configured by the user in the Session, or by evaluating _Parameter Automation,_
|
||||
** or simply to supply some technically necessary context information, most notably the
|
||||
** frame number of a source to retrieve.
|
||||
** - Preparing buffers filled with input data, in a suitable format, one for each distinct
|
||||
** item expected in the input data section of the processing-functor; filling these
|
||||
** input buffers requires the _recursive invocation_ of further Render Nodes...
|
||||
** - Allocating buffers for output data, sized and typed accordingly, likewise one for
|
||||
** each distinct item detailed in the output data argument of the processing-functor.
|
||||
**
|
||||
** @todo relies still on an [obsoleted implementation draft](\ref bufftable-obsolete.hpp)
|
||||
** @see engine::ProcNode
|
||||
** @see engine::StateProxy
|
||||
** @see engine::FeedManifold
|
||||
** @see nodewiring.hpp interface for building/wiring the nodes
|
||||
** The FeedManifold template, which (as mentioned above) is used by this standard implementation
|
||||
** of media processing in the form of the MediaWeavingPattern, is configured specifically for each
|
||||
** distinct signature of a processing-functor to match the implied structural requirements. If a
|
||||
** functor is output-only, no input buffer section is present; if it expects processing parameters,
|
||||
** storage for an appropriate data tuple is provided and a parameter-functor can be configured.
|
||||
** A clone-copy of the processing-functor itself is also stored alongside within
|
||||
** the FeedManifold, and thus placed into stack memory, where it is safe even during deeply nested
|
||||
** recursive invocation sequences, while rendering in general is performed massively in parallel.
|
||||
**
|
||||
** @warning as of 12/2024 first complete integration round of the Render engine ////////////////////////////TICKET #1367
|
||||
** In the end, the actual implementation code of the weaving pattern has to perform the connection
|
||||
** and integration between the »recursive weaving scheme« and the invocation structure implied by
|
||||
** the FeedManifold. It has to set off the recursive pull-invocation of predecessor ports, retrieve
|
||||
** the result data buffers from these and configure the FeedManifold with the \ref BuffHandle entries
|
||||
** retrieved from these recursive calls. Buffer handling in general is abstracted and codified through
|
||||
** the [Buffer Provider framework](\ref buffer-provider.hpp), which offers the means to allocate further
|
||||
** buffers and configure them into the FeedManifold for the output data. The »buffer handling protocol«
|
||||
** also requires to invoke BuffHandle::emit() at the point when result data can be assumed to be placed
|
||||
** into the buffer, and to release buffers not further required through a BuffHandle::release() call;
|
||||
** Notably this applies to the input buffers after completion of the processing-functor invocation, and
|
||||
** is required also for secondary (and in a way, superfluous) result buffers, which are sometimes
|
||||
** generated as a by-product of the processing function invocation, but are actually not passed
|
||||
** as output up the node invocation chain.
|
||||
**
|
||||
** @see feed-manifold.hpp
|
||||
** @see weaving-pattern-builder.hpp
|
||||
** @see \ref proc-node.hpp "Overview of Render Node structures"
|
||||
**
|
||||
** @warning WIP as of 12/2024 first complete integration round of the Render engine ////////////////////////////TICKET #1367
|
||||
**
|
||||
*/
|
||||
|
||||
|
|
@ -62,19 +146,16 @@
|
|||
#include "lib/several.hpp"
|
||||
//#include "lib/util-foreach.hpp"
|
||||
//#include "lib/iter-adapter.hpp"
|
||||
#include "lib/meta/function.hpp"
|
||||
//#include "lib/meta/function.hpp"
|
||||
//#include "lib/itertools.hpp"
|
||||
//#include "lib/util.hpp" ////////OOO wegen manifoldSiz<FUN>()
|
||||
|
||||
#include <utility>
|
||||
#include <array>
|
||||
//#include <stack>
|
||||
|
||||
|
||||
namespace steam {
|
||||
namespace engine {
|
||||
|
||||
using std::forward;
|
||||
using lib::Several;
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -849,18 +849,16 @@ __Note__: after detailed analysis, this use case was deemed beyond the scope of
|
|||
&rarr; AdviceRequirements
|
||||
</pre>
|
||||
</div>
|
||||
<div title="AllocationCluster" modifier="Ichthyostega" created="200810180031" modified="200810200127" tags="def img">
|
||||
<div title="AllocationCluster" modifier="Ichthyostega" created="200810180031" modified="202412220520" tags="def img" changecount="2">
|
||||
<pre>Memory management facility for the low-level model (render nodes network). The model is organised into temporal segments, which are considered to be structurally constant and uniform. The objects within each segment are strongly interconnected, and thus each segment is built in a single build process and is replaced or released as a whole. __~AllocationCluster__ implements memory management to support this usage pattern. It owns a number of object families of various types.[>img[draw/AllocationCluster.png]]
|
||||
* [[processing nodes|ProcNode]] &mdash; probably with several subclasses (?)
|
||||
* [[wiring descriptors|WiringDescriptor]]
|
||||
* [[processing nodes|ProcNode]]
|
||||
* the Port elements embedded into these
|
||||
* the input/output descriptor arrays used by the latter
|
||||
|
||||
For each of these families we can expect an initially undetermined (but rather large) number of individual objects, typically allocated within a short timespan and to be released cleanly on destruction of the AllocationCluster.
|
||||
|
||||
''Problem of calling the dtors''
Even if the low-level memory manager(s) may use raw storage, we require that the allocated objects' destructors be called. This means keeping track at least of the number of objects allocated (without wasting too much memory for bookkeeping). Besides, as the objects are expected to be interconnected, it may be dangerous to destroy a given family of objects while another family of objects may rely on the former in its destructor. //If we happen to get into this situation,// we need to define a priority order on the types and ensure the destruction sequence is respected.

&rarr; see MemoryManagement
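The destructor-tracking idea can be sketched in a few lines -- generic C++ only, with hypothetical names, not the actual Lumiera {{{AllocationCluster}}}: each {{{create()}}} records a type-erased destructor callback, and the cluster invokes them in reverse creation order when it dies.

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <new>
#include <utility>
#include <vector>

// Hypothetical sketch (not the actual Lumiera AllocationCluster):
// objects are created into cluster-owned storage, and a type-erased
// destructor callback is recorded for each one, so all dtors can be
// invoked in reverse creation order on destruction of the cluster.
class ClusterSketch
  {
    std::vector<std::unique_ptr<std::byte[]>> storage_;
    std::vector<std::function<void()>>        dtors_;
    
  public:
    template<class X, typename...ARGS>
    X&
    create (ARGS&&... args)
      {
        auto mem = std::make_unique<std::byte[]> (sizeof(X));
        X* obj = new(mem.get()) X (std::forward<ARGS> (args)...);
        storage_.push_back (std::move (mem));
        dtors_.emplace_back ([obj]{ obj->~X(); });   // remember how to destroy
        return *obj;
      }
    
   ~ClusterSketch()
      {  // reverse order guards against dependencies between object families
        for (auto it = dtors_.rbegin(); it != dtors_.rend(); ++it)
            (*it)();
      }
  };
```

A fixed priority order on the //types// (as demanded above) would go beyond this sketch; reverse creation order is merely the simplest safe default.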
</pre>
</div>
<div title="Asset" modifier="Ichthyostega" created="200708100337" modified="202303272127" tags="def classes img" changecount="5">

@ -1279,22 +1277,6 @@ __see also__
&rarr; RenderMechanics for details on the buffer management within the node invocation for a single render step
</pre>
</div>
<div title="BufferTable" modifier="Ichthyostega" created="201109172253" modified="202407030218" tags="def spec Rendering draft" changecount="1">
<pre>{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
The invocation of individual [[render nodes|ProcNode]] uses a ''buffer table'' -- an internal helper data structure to encapsulate technical details of the allocation, use, re-use and freeing of data buffers for the media calculations. Here, the management of the physical data buffers is delegated through a BufferProvider, which typically is implemented relying on the ''frame cache'' in the Vault. Yet some partially quite involved technical details need to be settled for each invocation: we need input buffers, maybe provided as external input, while in other cases filled by a recursive call. We need storage to prepare the (possibly automated) parameters, and finally we need a set of output buffers. All of these buffers and parameters need to be rearranged for invoking the (external) processing function, followed by releasing the input buffers and committing the output buffers to be used as result.

Because there are several flavours of node wiring, the building blocks comprising such a node invocation will be combined depending on the circumstances. Performing all these various steps is indeed the core concern of the render node -- with the help of BufferTable to deal with the repetitive, tedious and technical details.

!requirements
The layout of the buffer table will be planned beforehand for each invocation, alongside with planning the individual invocation jobs for the scheduler. At that point, a generic JobTicket for the whole timeline segment is available, describing the necessary operations in an abstract way, as determined by the preceding planning phase. Jobs are prepared chunk-wise, some time in advance (but not all jobs at once). Jobs will be executed concurrently. Thus, buffer tables need to be created repeatedly and placed into a memory block accessed and owned exclusively by the individual job.
* within the buffer table, we need a working area for the output handles, the input handles and the parameter descriptors
* actually, these can be seen as pools holding handle objects, which might even be re-used, especially for a chain of effects calculated in-place
* each of these pools is characterised by a common //buffer type,// represented as a buffer descriptor
* we need some way to integrate with the StateProxy, because some of the buffers need to be marked specially, e.g. as result
* there should be convenience functions to release all pending buffers, forwarding the release operation to the individual handles
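The "structured, typed storage" direction taken by the rework can be illustrated with a rough sketch -- all names hypothetical, and the delegation to a BufferProvider deliberately omitted: a per-invocation block holds input handles, output handles and a strictly typed parameter tuple, and rearranges them for the external processing function.

```cpp
#include <array>
#include <cstddef>
#include <tuple>

// Illustrative sketch only (hypothetical names, not the Lumiera API):
// per-invocation storage holding input handles, output handles and a
// typed parameter tuple in one block owned exclusively by the job.
template<size_t NI, size_t NO, typename...PARS>
struct InvocationTable
  {
    using Buff = void*;                  // stand-in for a real buffer handle
    
    std::array<Buff,NI>  inputs{};
    std::array<Buff,NO>  outputs{};
    std::tuple<PARS...>  params;
    
    template<class FUN>
    void
    invoke (FUN&& processFunction)
      {  // rearrange buffers and parameters for the external processing function
        std::apply ([&](auto&... p)
                      { processFunction (inputs.data(), outputs.data(), p...); }
                   , params);
      }
  };
```

Usage would bind a concrete processing function, e.g. {{{tab.invoke([](void**, void** out, int a, int b){ *static_cast<int*>(out[0]) = a + b; });}}} -- the compiler checks that the parameter types in the tuple match the function's expectations.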
</pre>
</div>
<div title="BuildFixture" modifier="Ichthyostega" created="201011282003" modified="201011290504" tags="Builder spec operational">
|
||||
<pre>//Building the fixture is actually at the core of the [[builder's operation|Builder]]//
|
||||
{{red{WIP as of 11/10}}} &rarr; see also the [[planning page|PlanningBuildFixture]]
|
||||
|
|
@ -1874,7 +1856,7 @@ The fake implementation should follow the general pattern planned for the Prolog

</pre>
</div>
<div title="CoreDevelopment" modifier="Ichthyostega" created="200706190056" modified="202304132321" tags="overview" changecount="14">
<div title="CoreDevelopment" modifier="Ichthyostega" created="200706190056" modified="202412220516" tags="overview" changecount="15">
<pre>The Render Engine is the part of the application doing the actual video calculations. Built on top of system-level services and retrieving raw audio and video data through [[Lumiera's Vault Layer|Vault-Layer]], its operations are guided by the objects and parameters edited by the user in [[the session|Session]]. The //middle layer// of the Lumiera architecture, known as the Steam-Layer, spans the area between these two extremes, providing the (abstract) edit operations available to the user, the representation of [["editable things"|MObjects]] and the translation of those into structures and facilities allowing to [[drive the rendering|Rendering]].

!About this wiki page
@ -1895,7 +1877,7 @@ The system is ''open'' inasmuch every part mirrors the structure of correspondin
&rarr; [[Overview Render Engine (low level model)|OverviewRenderEngine]]
&rarr; BuildProcess and RenderProcess
&rarr; how [[Automation]] works
&rarr; [[Problems|ProblemsTodo]] to be solved and notable [[design decisions|DesignDecisions]]
&rarr; notable [[design decisions|DesignDecisions]]
&rarr; [[Concepts, Abstractions and Formalities|Concepts]]
&rarr; [[Implementation Details|ImplementationDetails]] {{red{WIP}}}

@ -2342,7 +2324,7 @@ Ongoing [[Builder]] activity especially can remould the Segmentation on a copy o
* this kind of information is available within the scheduling process ⟹ the [[Scheduler]] must support triggering on dependency events
</pre>
</div>
<div title="FixtureStorage" modifier="Ichthyostega" created="201012140231" modified="201403071814" tags="Builder impl operational draft" changecount="22">
<div title="FixtureStorage" modifier="Ichthyostega" created="201012140231" modified="202412220502" tags="Builder impl operational draft" changecount="23">
<pre>The Fixture &rarr; [[data structure|FixtureDatastructure]] acts as an umbrella to hook up the elements of the render engine's processing nodes network (LowLevelModel).
Each segment within the [[Segmentation]] of any timeline serves as ''extent'' or unit of memory management: it is built up completely during the corresponding build process and becomes immutable thereafter, finally to be discarded as a whole when superseded by a modified version of that segment (new build process) -- but only after all related render processes (&rarr; CalcStream) are known to be terminated.

@ -2424,7 +2406,7 @@ The management of fixture storage has to deal with some distinct situations
:as long as any of the //tainted// ~CalcStreams is still alive, all of the data structures held by the AllocationCluster of that segment need to stay around
:* the DispatcherTables
:* the JobTicket structure
:* the [[processing nodes|ProcNode]] and accompanying WiringDescriptor records
:* the [[processing nodes|ProcNode]] and accompanying Port elements

!!!conclusions for the implementation
In the end, getting the memory management within Segmentation and Playback correct boils down to the following requirements

@ -3887,10 +3869,9 @@ Besides routing to a global pipe, wiring plugs can also connect to the source po
Finally, this example shows an ''automation'' data set controlling some parameter of an effect contained in one of the global pipes. From the effect's POV, the automation is simply a ParamProvider, i.e. a function yielding a scalar value over time. The automation data set may be implemented as a bézier curve, or by a mathematical function (e.g. sine or fractal pseudo random) or by some captured and interpolated data values. Interestingly, in this example the automation data set has been placed relative to the meta clip (albeit on another track), thus it will follow and adjust when the latter is moved.
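The ParamProvider notion described above can be condensed into a tiny sketch -- the type names here are assumptions for illustration, not the actual Lumiera interfaces: from the effect's point of view, an automated parameter is nothing but a function mapping time to a scalar value.

```cpp
#include <cmath>
#include <functional>

// Illustration of the ParamProvider idea (names assumed for this sketch):
// an automated parameter is just a function Time -> value.
using Time          = double;                     // stand-in for Lumiera's time type
using ParamProvider = std::function<double(Time)>;

constexpr double PI = 3.141592653589793;

// an automation data set backed by a mathematical function (sine oscillator)
ParamProvider
sineAutomation (double freq, double amplitude)
{
  return [=](Time t){ return amplitude * std::sin (2*PI * freq * t); };
}
```

A bézier curve or interpolated captured data would fit behind the very same function signature, which is the point of the abstraction.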
</pre>
</div>
<div title="ImplementationDetails" modifier="Ichthyostega" created="200708080322" modified="202303272225" tags="overview" changecount="14">
<div title="ImplementationDetails" modifier="Ichthyostega" created="200708080322" modified="202412220520" tags="overview" changecount="15">
<pre>This wiki page is the entry point to detail notes covering some technical decisions, details and problems encountered in the course of the years, while building the Lumiera application.

* [[Memory Management Issues|MemoryManagement]]
* [[Creating and registering Assets|AssetCreation]]
* [[Multichannel Media|MultichannelMedia]]
* [[Editing Operations|EditingOperations]]
@ -4516,39 +4497,6 @@ Because we deliberately won't make any asumptions about the implementation libra
It would be possible to circumvent this problem by requiring all supported implementation libraries to be known at compile time, because then the actual media implementation type could be linked to a facade type by generic programming. Indeed, Lumiera follows this route with regard to the possible kinds of MObject or [[Asset]] &mdash; but to the contrary, for the problem in question here, being able to include support for a new media data type just by adding a plugin by far outweighs the benefits of compile-time checked implementation type selection. So, as a consequence of this design decision we //note the possibility of the media file type discovery code being misconfigured// and selecting the //wrong implementation library at runtime.// And thus the render engine needs to be prepared for the source reading node of any pipe to flounder completely, and protect the rest of the system accordingly
</pre>
</div>
<div title="MemoryManagement" modifier="Ichthyostega" created="200708100225" modified="201812092254" tags="impl decision rewrite" changecount="3">
|
||||
<pre>Of course: Cinelerra currently leaks memory and crashes regularilly. For the newly written code, besides retaining the same level of performance, a main goal is to use methods and techniques known to support the writing of quality code. So, besides the MultithreadConsiderations, a solid strategy for managing the ownership of allocated memory blocks is necessary right from start.
|
||||
|
||||
!Problems
|
||||
# Memory management needs to work correct in a //fault tolerant environment//. That means that we need to be prepared to //handle on a non-local scale// some sorts of error conditions (without aborting the application). To be more precise: some error condition arises locally, which leads to a local abort and just the disabling/failing of some subsystem without affecting the application as a whole. This can happen on a regular base (e.g. rendering fails) and thus is __no excuse for leaking memory__
|
||||
# Some (not all) parts of the core application are non-deterministic. That means, we can't tie the memory management to any assumptions on behalf of the execution path
|
||||
|
||||
!C++ solution
|
||||
First of all -- this doesn't concern //every// allocation. It rather means there are certain //dangerous areas// which need to be identified. Anyhow, instead of carrying inherent complexities of the problem into the solution, we should rather look for common solution pattern(s) which help factoring out complexity.
|
||||
|
||||
For the case here in question this seems to be the __R__esource __A__llocation __I__s __I__nitialisation pattern (''RAII''). Which boils down to basically never using bare pointers when concerned with ownership. Client code allways gets to use a wrapper object, which cannot be obtained unless going through some well defined construction site. As an extension to the baisc RAII pattern, C++ allows us to build //smart wrapper objects//, thereby delegating any kind of de-registration or resource freeing automatically. This usage pattern doesn't necessarily imply (and in fact isn't limited to) just ref-counting.
|
||||
|
||||
!!usage scenarios
|
||||
# __existence is being used__: Objects just live for being referred to in an object network. In this case, use refcounting smart-pointers for every ref. (note: problem with cyclic refs)
# __entity bound ownership__: Objects can be tied to some long-living entity in the program, which holds the smart-pointer
#* if the existence of this ref-holding entity can be //guaranteed// (as if by contract), then the other users can build an object network with conventional pointers
#* otherwise, when the ref-holding entity //can disappear// in a regular program state, we need weak-refs and checking (because by our postulate the controlled resource needs to be destructed immediately, otherwise we would have the first case, existence == being used)
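The two scenarios map directly onto standard C++ smart pointers; the following is a generic illustration (not Lumiera code): scenario 1 corresponds to shared, refcounting ownership, while scenario 2b uses a weak ref which must be checked before every access.

```cpp
#include <memory>

// Generic C++ illustration of the usage scenarios above (not Lumiera code).
struct Resource { int value = 42; };

// returns true iff the weak-ref correctly detects the owner's disappearance
bool
weakRefDetectsDisappearance()
{
  std::weak_ptr<Resource> observer;               // non-owning, checked access
  {
    auto owner = std::make_shared<Resource>();    // entity-bound ownership
    observer = owner;
    if (auto ref = observer.lock())               // checked: entity still alive
        if (ref->value != 42) return false;
  }                                               // ...owner disappears here
  return observer.expired();                      // the weak-ref notices that
}
```

The {{{lock()}}} call is the "checking" demanded above: it yields a temporary owning handle, or nothing, never a dangling pointer.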

!!!dangerous uses
* the render nodes &rarr; [[detail analysis|ManagementRenderNodes]] {{red{TODO}}}
* the MObjects in the session &rarr; [[detail analysis|ManagementMObjects]] {{red{TODO}}}
* Asset - MObject relationship &rarr; [[detail analysis|ManagementAssetRelation]] {{red{TODO}}}

!!!rather harmless
* Frames (buffers), because they belong to a given [[RenderProcess (=StateProxy)|StateProxy]] and are just passed into the individual [[ProcNode]]s. This can be handled consistently with conventional methods.
* each StateProxy belongs to one top-level call to the ~Controller-Facade
* similar for the builder tools, which belong to a build process. Moreover, they are pooled and reused.
* the [[sequences|Sequence]] and the defined [[assets|Asset]] belong together to one [[Session]]. If the Session is closed, this means an internal shutdown of the whole ProcLayer, i.e. closing of all GUI representations and terminating all render processes. If these calls are implemented as blocking operations, we can assert that as long as any GUI representation or any render process is running, there is a valid session and model.

!using Factories
And, last but not least, doing large scale allocations is the job of the Vault. Exceptions being long-lived objects, like the session or the sequences, which are created once and don't bear the danger of causing memory pressure. Generally speaking, client code shouldn't issue "new" and "delete" as it sees fit. Questions of setup and lifecycle should always be delegated, typically through the usage of some [[factory|Factories]], which might return the product conveniently wrapped into a RAII style handle. Memory allocation is crucial for performance, and needs to be adapted to the actual platform -- which is impossible unless abstracted and treated as a separate concern.
</pre>
</div>
<div title="MetaAsset" modifier="Ichthyostega" created="201012290320" modified="201808111603" tags="def" changecount="4">
<pre>This category encompasses the various aspects of the way the application controls and manages its own behaviour. They are more related to the way the application behaves, as opposed to the way the edited data is structured and organised (which is the realm of [[structural assets|StructAsset]]) &rarr; {{red{Ticket #1156}}}
* StreamType &rarr; a type system for describing and relating media data streams

@ -4810,7 +4758,7 @@ Moreover, the design of coordinate matching and resolving incurs a structure sim
In the most general case the render network may be just a DAG (not a tree). Especially, multiple exit points may lead down to the same node, and following each of these possible paths the node may be at a different depth on each. This rules out a simple counter starting from the exit level, leaving us with the possibility of either employing a rather convoluted addressing scheme or using arbitrary ID numbers.{{red{...which is what we do for now}}}
</pre>
</div>
<div title="NodeOperationProtocol" modifier="Ichthyostega" created="200806010251" modified="202407030216" tags="Rendering operational" changecount="2">
<div title="NodeOperationProtocol" modifier="Ichthyostega" created="200806010251" modified="202412220501" tags="Rendering operational" changecount="3">
<pre>{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
The [[nodes|ProcNode]] are wired to form a "Directed Acyclic Graph"; each node knows its predecessor(s), but not its successor(s). The RenderProcess is organized according to the ''pull principle'', thus we find an operation {{{pull()}}} at the core of this process. Meaning that there isn't a central entity to invoke nodes consecutively. Rather, the nodes themselves contain the detailed knowledge regarding prerequisites, so the calculation plan is worked out recursively. Yet still there are some prerequisite resources to be made available for any calculation to happen. So the actual calculation is broken down into atomic chunks of work, resulting in a 2-phase invocation whenever "pulling" a node. For this to work, we need the nodes to adhere to a specific protocol:

@ -4836,7 +4784,6 @@ The [[nodes|ProcNode]] are wired to form a "Directed Acyclic Graph"; e
{{red{Update 8/13 -- work on this part of the code base has stalled, but now the plan is to get back to this topic when coding down from the Player to the Engine interface and from there to the NodeInvocation. The design as outlined above was mostly coded in 2011, but never really tested or finished; you can expect some reworkings and simplifications, but basically this design looks OK}}}

some points to note:
* the WiringDescriptor is {{{const}}} and precalculated while building (remember another thread may call in parallel)
* when a node is "inplace-capable", input and output buffer may actually point to the same location
* but there is no guarantee for this to happen, because the cache may be involved (and we can't overwrite the contents of a cache frame)
* nodes in general may require N inputs and M output frames, which are expected to be processed in a single call
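Setting the 2-phase refinement aside, the basic pull recursion can be sketched as follows -- a generic illustration with assumed names, not the actual ProcNode interface: each node knows only its predecessors, and pulling the exit node acquires all inputs first.

```cpp
#include <vector>

// Minimal illustration of the pull principle (sketch with assumed names):
// each node knows only its predecessors; pulling the exit node
// recursively acquires all inputs before calculating.
struct Node
  {
    std::vector<Node*> leads;                    // predecessor nodes
    int (*process) (std::vector<int> const&);    // stand-in for the calculation
    
    int
    pull()                                       // recursion starts at the output side
      {
        std::vector<int> inputs;
        for (Node* pred : leads)                 // acquire prerequisites first...
            inputs.push_back (pred->pull());
        return process (inputs);                 // ...then perform this node's step
      }
  };
```

The real protocol splits each such step into planning and calculation phases and works on frame buffers rather than plain {{{int}}} values, but the recursion shape is the same.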
@ -6189,7 +6136,7 @@ This is the core service provided by the player subsystem. The purpose is to cre
:any details of this processing remain opaque for the clients; even the player subsystem just accesses the EngineFaçade
</pre>
</div>
<div title="PlaybackVerticalSlice" creator="Ichthyostega" modifier="Ichthyostega" created="202303272236" modified="202406211422" tags="overview impl discuss draft" changecount="40">
<div title="PlaybackVerticalSlice" creator="Ichthyostega" modifier="Ichthyostega" created="202303272236" modified="202412220549" tags="overview impl discuss draft" changecount="42">
<pre>//Integration effort to promote the development of rendering, playback and video display in the GUI//
This IntegrationSlice was started in {{red{2023}}} as [[Ticket #1221|https://issues.lumiera.org/ticket/1221]] to coordinate the completion and integration of various implementation facilities, planned, drafted and built during recent years; this effort marks the return of development focus to the lower layers (after years of focussed UI development) and will implement the asynchronous and time-bound rendering coordinated by the [[Scheduler]] in the [[Vault|Vault-Layer]]

@ -6225,10 +6172,12 @@ The Scheduler will be structured into two Layers, where the lower layer is imple
__December.23__: building the Scheduler required time and dedication, including some related topics like a [[suitable memory management scheme|SchedulerMemory]], rework and modernisation of the [[#1279 thread handling framework|https://issues.lumiera.org/ticket/1279]], using a [[worker pool|SchedulerWorker]] and developing the [[foundation for load control|SchedulerLoadControl]]. This amounts to the creation of a considerable body of new code; some &rarr;[[load- and stress testing|SchedulerTest]] helps to establish &rarr;[[performance characteristics and traits|SchedulerBehaviour]].

__April.24__: after completing an extended round of performance tests for the new Scheduler, development focus is now shifted upwards to the [[Render Node Network|ProcNode]], where Engine activity is carried out. This part was addressed at the very start of the project, and later again -- yet could never be completed, due to a lack of clear reference points and technical requirements. Hope to achieve a breakthrough rests on this integration effort now.
__June.24__: assessment of the existing code indicated some parts not well suited to the expected usage. Notably the {{{AllocationCluster}}}, which is the custom allocator used by the render nodes network, was reworked and simplified. Moreover, a new custom container was developed, to serve as //link to connect the nodes.// Beyond that, in-depth review validated the existing design for the render nodes, while also implying some renaming and rearrangements
* 🗘 establish a test setup for developing render node functionality
* 🗘 build, connect and invoke some dummy render nodes directly in a test setup
* ⌛ introduce a middle layer for linking the JobTicket to the actual invocation
__June.24__: assessment of the existing code indicated some parts not well suited to the expected usage. Notably the {{{AllocationCluster}}}, which is the custom allocator used by the render nodes network, was reworked and simplified. Moreover, a new custom container was developed, to serve as //link to connect the nodes.// Beyond that, in-depth review validated the existing design for the render nodes, while also indicating a clear need to rearrange and re-orient the internal structure within a node invocation, to be better aligned with the structure of the Application developed thus far...
__December.24__: after an extended break (due to family-related obligations), a re-oriented concept for the Render Node invocation was developed in a prototyping setup. Assessment of results and further analysis leads to the conclusion that a more flexible invocation scheme -- and especially structured invocation parameters -- must be retro-fitted into the code developed thus far. Features of this kind cannot be added later by decoration and extension -- rather, the data structures used directly within the invocation required considerable elaboration. This could be accomplished in the end, through a bold move to abandon array-style storage and switch over to strictly typed data tuples and a functor-based binding to the implementation library.
* ✔ establish a test setup for developing render node functionality
* ✔ build and connect some dummy render nodes directly in a test setup
* 🗘 invoke render nodes stand-alone, without framework
* ⌛ introduce a //placeholder link// to connect JobTicket to the actual invocation (without actual [[Fixture]])
* ⌛ rework and complete the existing node invocation code
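The »functor-based binding to the implementation library« mentioned above can be pictured with a small sketch -- all names are hypothetical, chosen for illustration only: an arbitrary processing function of the external library is adapted into a uniform invocation signature, with its strictly typed parameter tuple baked into the resulting closure.

```cpp
#include <tuple>

// Hypothetical sketch of a functor-based binding (assumed names):
// an external library function is adapted into a uniform signature,
// with its strictly typed parameter tuple captured in the closure.
template<typename...PARS>
auto
bindProcessing (void (*libFunction)(float* out, PARS...), std::tuple<PARS...> params)
{
  return [=](float* outputBuffer)
            {  // unpack the typed parameter tuple at invocation time
              std::apply ([&](PARS... p){ libFunction (outputBuffer, p...); }
                         , params);
            };
}

// example stand-in for an external library function: fill a buffer with a value
void
fillValue (float* out, int siz, float val)
{
  for (int i=0; i<siz; ++i)
      out[i] = val;
}
```

Usage: {{{auto proc = bindProcessing (&fillValue, std::make_tuple (4, 0.5f));}}} yields a functor invocable with just an output buffer, while mismatched parameter types are rejected at compile time.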

!Decisions
@ -6331,49 +6280,6 @@ The backbone of the Lumiera UI is arranged such as to produce a data feed with s
Run as part of the UiCoreServices, it is attached to the UI-Bus and listens to all ''state mark messages'' to distil the notion of relevant current state.
On a basic level, the task is to group and store those messages in a way as to overwrite previous messages with new updates on the level of individual properties. Beyond that, there is the sensitivity to context, presentation perspective and work site, which means to impose an additional structure on this basic ''state snapshot'', to extract and replicate structured sets of state information.</pre>
</div>
<div title="ProblemsTodo" modifier="Ichthyostega" created="200708050524" modified="201810071724" tags="design discuss" changecount="9">
|
||||
<pre>Open issues, Things to be worked out, Problems still to be solved...
|
||||
|
||||
!!Parameter Handling
|
||||
The requirements are not quite clear; obviously Parameters are the foundation for getting automation right and for providing effect editing interfaces, so it seems to me we need some sort of introspection, i.e. Parameters need to be discovered, enumerated and described at runtime. (&rarr; see [[tag:automation|automation]])
|
||||
|
||||
''Automation Type'': Directly connected is the problem of handling the //type// of parameters sensible, including the value type of automation data. My first (somewhat naive) approach was to "make everything a double". But this soon leads into quite some of the same problems haunting the automation solution implemented in the current Cinelerra codebase. What makes the issue difficult is the fact we both need static diversity as well as dynamic flexibility. Usually, when combining hierarchies and templates, one has to be very careful; so I just note the problem down at the moment and will revisit it later, when I have a more clear understanding of the demands put onto the [[ProcNode]]s
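One conceivable shape for "static diversity plus dynamic flexibility" -- an assumption for illustration here, not a settled Lumiera decision: instead of "making everything a double", a parameter value could be a closed set of typed alternatives, keeping each alternative statically typed while still allowing runtime discovery and dispatch.

```cpp
#include <string>
#include <variant>

// Illustration only (hypothetical names, not a Lumiera decision):
// an automation value as a closed set of typed alternatives.
using ParamValue = std::variant<bool, int, double, std::string>;

// a descriptor like this could support runtime discovery and enumeration
struct ParamDescriptor
  {
    std::string name;
    ParamValue  value;
  };
```

Introspection code can then enumerate descriptors and branch on the actual alternative, e.g. via {{{std::holds_alternative}}} or {{{std::visit}}}.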

!!Treatment of Time (points) and Intervals
At the moment we have no clear picture what is needed and what problems we may face in that domain.
From experience, mainly with other applications, we can draw the following conclusions
* drift and rounding errors are dangerous, because time in our context usually is understood as a fixed grid (frames, samples...)
* fine grained time values easily get very large
* Cinelerra currently uses the approach of simply counting natural values for each media type separately. In an environment mixing several different media types freely, this seems a bit too simplistic (because it actually brings in the danger of rounding errors, just think of drop-frame TC)
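One common remedy for drift -- shown here as a generic illustration, not the documented Lumiera solution: represent time as an exact rational count of seconds, so that awkward frame grids like NTSC's 30000/1001 fps incur no rounding at all.

```cpp
#include <cstdint>
#include <numeric>

// Generic illustration (not the documented Lumiera time implementation):
// time as an exact rational number of seconds avoids drift on fixed grids.
struct RatTime
  {
    int64_t num, den;                       // num/den seconds
    
    RatTime (int64_t n, int64_t d) : num{n}, den{d}
      {
        int64_t g = std::gcd (n, d);        // keep the fraction normalised
        num /= g;  den /= g;
      }
    
    friend RatTime
    operator+ (RatTime a, RatTime b)
      {
        return RatTime{a.num*b.den + b.num*a.den, a.den*b.den};
      }
    friend bool
    operator== (RatTime a, RatTime b)
      {
        return a.num==b.num and a.den==b.den;
      }
  };
```

Summing any number of NTSC frame durations this way stays exact, whereas repeated floating-point addition of 1/29.97 accumulates error.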

!!Organizing of Output Channels
How to handle the simultaneous rendering of several output streams (video, audio channels)? Shall we treat the session as one entity containing different output channels, or should it rather be seen as a composite of several sub-sessions, each for only one output channel? This decision will be reflected in the overall structure of the network of render nodes: we could have a list of channel-output generating pipelines in each processor (for every segment), or we could have independently segmented lists of Processors for every output channel/type. The problem is, //it is not clear what approach to prefer at the moment// because we are just guessing.

!!Tracks, Channels, Layers
Closely related to this is the not-so-obvious problem how to understand the common global structures found in most audio and video editing applications. Mostly, they stem from imitating hardware recording and editing solutions, thus easing the transition for professionals grown up with analogue hardware based media. But as digital media are the de-facto standard nowadays, we could rethink some of the accidental complexity introduced by sticking to the hardware tool metaphor.
* is it really necessary to have fixed global tracks?
* is it really helpful to feed "source tracks" into global processing busses/channels?
Users accustomed to modern GUI applications typically expect that //everything is an object// and can be pulled around and manipulated individually. This seems natural at first, but raises the problem of providing an efficient workflow for handling larger projects and editing tasks. So, if we don't have a hard-wired multitrack+bus architecture, we need some sort of templating to get the standard editing use case done efficiently.

!!Compound and Multiplicity
Simple relations can be hard-wired. But, on the contrary, it would be as naive to define a Clip as having a video track and two audio tracks, as it would be naive to overlook the problem of holding video and corresponding audio together. And, moreover, the default case has to be processed in a //straightforward// fashion, with as few tests and decisions as possible. So, basically each component participating in getting the core processing done has to mirror the structure pattern of the other parts, so that processing can be done without testing and forking. But this leaves us with the problem where to put the initial knowledge about the structural pattern used for building up the compound structures and &mdash; especially &mdash; the problem how to treat different kinds of structural patterns, how to detect the pattern to be applied and how to treat multiple instances of the same structural pattern.

One example of this problem is the [[handling of multichannel media|MultichannelMedia]]. Following the above reasoning, we end up with a [["structural processing pattern"|ProcPatt]] -- typically one video stream with an MPEG decoder and a pair of audio streams, which need either to be routed to some "left" and "right" output pipes, or have to be passed through a panning filter accordingly. Now the problem is: //create a new instance of this structure for each new media, or detect which media to subsume under an existing pattern instance.//

!!Parallelism
We need to work out guidelines for dealing with operations going on simultaneously. Certainly, this will divide the application into several different regions. As always, the primary goal is to avoid multithreading problems altogether. Typically, this can be achieved by making matters explicit: externalise state, make the processing subsystems stateless, queue and schedule tasks, use isolation layers.
* the StateProxy is a key for the individual render process state, which is managed in separate [[StateFrame]]s in the Vault. The [[processing network|ProcNode]] is stateless.
* the [[Fixture]] provides an isolation layer between the render engine and the Session / high-level model
* all EditingOperations are intentionally not threadsafe, because they are [[scheduled|ProcLayerScheduler]]

!!the perils of data representation
In software development, there is a natural inclination to cast "reality" into data, the structure of which has to be nailed down first. Then, everyone might "access reality" and work on it. Doing so might sound rational, natural, even self-evident and sound; yet as compelling as it might be, this approach is fundamentally flawed. It is known to work well only for small, "handsome" projects, where you clearly know up-front what you're up to: namely to get away, after being paid, before anyone realises the fact you've built something that looks nice but does not fit.
So the challenge of any major undertaking in software construction is //not to build a universal model of truth.// Rather, we want to arrive at something that can be made to fit.
Which can be remoulded, over and over again, without breaking down.

More specifically, we start building something, and while under way, our understanding sharpens, and we learn that actually we want something entirely different. Yet still we know what we need and we don't want just something arbitrary. There is a constant core in what we're headed at, and we need the ability to //settle matters.// We need a backbone to work against, a skeleton to support us with its firmness, while also offering joints and links, to be bent and remoulded without breakage. The distinctive idea to make this possible is the principle of ''Subsidiarity''. The links and joints between autonomous centres can be shaped to be in fact an exchange, a handover based on a common understanding of the //specific matters to deal with// at that given joint.
</pre>
</div>
<div title="ProcAsset" modifier="Ichthyostega" created="200709221343" modified="201003140233" tags="def classes img">
|
||||
<pre>All Assets of kind asset::Proc represent //processing algorithms// in the bookkeeping view. They enable loading, browsing and maybe even parametrizing all the Effects, Plugins and Codecs available for use within the Lumiera Session.
|
||||
|
||||
|
|
@ -6391,7 +6297,7 @@ In 2018, the middle Layer was renamed into &rarr; Steam-Layer
|
|||
|
||||
</pre>
|
||||
</div>
|
||||
<div title="ProcNode" modifier="Ichthyostega" created="200706220409" modified="202410300053" tags="def spec" changecount="22">
|
||||
<div title="ProcNode" modifier="Ichthyostega" created="200706220409" modified="202412220556" tags="def spec" changecount="23">
|
||||
<pre>//A data processing node within the Render Engine.// Its key feature is the ability to pull a [[Frame]] of calculated data -- which may involve the invocation of //predecessor nodes// and the evaluation of [[Parameter Providers|ParamProvider]]. Together, those nodes form a »node-network«, a ''D''irected ''A''cyclic ''G''raph (DAG) known as the LowLevelModel, which is //explicit// to the degree that it can be //performed// to generate or process media. As such it is an internal structure and //compiled// by evaluating and interpreting the HighLevelModel within the [[Session]], which is the symbolic or logical representation of »the film edit« or »media product« established and created by the user.
|
||||
|
||||
So each node and all interconnections are //oriented.// Calculation starts from the output side and propagates to acquire the necessary inputs, thereby invoking the predecessor nodes (also known as »leads«). This recursive process leads to performing a sequence of calculations -- just to the degree necessary to deliver the desired result. And since the Render Node Network is generated by a compilation step, which is conducted repeatedly and on-demand by the part of the Lumiera application known as the [[Builder]], all node connectivity can be assumed to be adequate and fitting, each predecessor can be assumed to deliver precisely the necessary data into buffers with the correct format to be directly consumed for the current calculation step. The actual processing algorithm working on these buffers is provided by an ''external Media-processing-library'' -- the LowLevelModel can thus be seen as a pre-determined organisation and orchestration structure to enact externally provided processing functionality.
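The pull principle described above can be sketched in a few lines of C++. This is an illustrative toy, not the actual {{{ProcNode}}} interface: plain integers stand in for media buffers, and the names {{{Node}}}, {{{leads}}} and {{{pull()}}} only loosely mirror the terms used here. It shows the essential shape: a recursive, depth-first descent that acquires inputs from predecessors and then enacts an externally provided processing function.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Toy sketch of the »pull principle«: each node recursively pulls its
// predecessors ("leads") and then applies its processing function.
// All names here are illustrative, not the real Lumiera API.
struct Node {
    std::vector<Node*> leads;                             // predecessor nodes
    std::function<int(std::vector<int> const&)> process;  // external algorithm

    int pull() const {                      // depth-first evaluation
        std::vector<int> inputs;
        for (Node* p : leads)               // acquire the necessary inputs
            inputs.push_back(p->pull());
        return process(inputs);             // enact the external processing
    }
};
```

A caller only ever pulls from the exit node; the cascade of predecessor invocations happens implicitly, just to the degree necessary to deliver the result.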
|
||||
|
|
@ -6419,13 +6325,12 @@ A {{{steam::engine::ProcNode}}} is an arrangement of descriptors, connectivity a
|
|||
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
|
||||
|
||||
!! {{red{open questions}}}
|
||||
* how to address a node
|
||||
* how to type them
|
||||
* how to discover the number and type of the ports
|
||||
* how to discover the possible parameter ports
|
||||
* how to define and query for additional capabilities
|
||||
* how to build a descriptor for the actual operation bound into a node
|
||||
* how to query from within the [[Builder]] into the binding of a media-processing library to retrieve a concrete function binding
|
||||
* how to identify individual «slots» in a Port configuration in a way to enable dedicated configuration on a builder API.
|
||||
* are there any further operations on a »Render Node« -- beyond invoking it for rendering?
|
||||
* do we even need to retain {{{ProcNode}}} as an entity, or is a network of {{{Port}}} elements sufficient?
|
||||
|
||||
&rarr; see also the [[open design process draft|http://www.pipapo.org/pipawiki/Lumiera/DesignProcess/DesignRenderNodesInterface]]
|
||||
&rarr; see [[mem management|ManagementRenderNodes]]
|
||||
&rarr; see RenderProcess
|
||||
</pre>
|
||||
|
|
@ -6959,24 +6864,22 @@ Besides housing the planning pipeline, the RenderDrive is also a JobFunctor for
|
|||
&rarr; [[Player]]
|
||||
</pre>
|
||||
</div>
|
||||
<div title="RenderEntities" modifier="Ichthyostega" created="200706190715" modified="202410292339" tags="Rendering classes img" changecount="3">
|
||||
<div title="RenderEntities" modifier="Ichthyostega" created="200706190715" modified="202412220511" tags="Rendering classes" changecount="5">
|
||||
<pre>{{red{⚠ In-depth rework underway as of 10/2024...}}}
|
||||
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
|
||||
The [[Render Engine|Rendering]] only carries out the low-level and performance critical tasks. All configuration and decision concerns are to be handled by [[Builder]] and [[Dispatcher|SteamDispatcher]]. While the actual connection of the Render Nodes can be highly complex, basically each Segment of the Timeline with uniform characteristics is handled by one Processor, which is a graph of [[Processing Nodes|ProcNode]] discharging into an ExitNode. The Render Engine Components as such are //stateless// themselves; for the actual calculations they are combined with a StateProxy object generated by and connected internally to the Controller {{red{really?? 2018}}}, while at the same time holding the Data Buffers (Frames) for the actual calculations.
|
||||
|
||||
{{red{🗲🗲 Warning: what follows is an early draft from 2009, obsoleted by actual plans and development as of 10/2024 🗲🗲}}}
|
||||
Currently the Render/Playback is being targeted for implementation; almost everything in this diagram will be implemented in a slightly different way....
|
||||
[img[Entities comprising the Render Engine|uml/fig128389.png]]
|
||||
obsolete: drawing "Entities comprising the Render Engine"
|
||||
</pre>
|
||||
</div>
|
||||
<div title="RenderImplDetails" modifier="Ichthyostega" created="200806220211" modified="202407030217" tags="Rendering impl img" changecount="1">
|
||||
<div title="RenderImplDetails" modifier="Ichthyostega" created="200806220211" modified="202412220525" tags="Rendering impl" changecount="5">
|
||||
<pre>{{red{⚠ In-depth rework underway as of 7/2024...}}}
|
||||
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
|
||||
Below are some notes regarding details of the actual implementation of the render process and processing node operation. In the description of the [[render node operation protocol|NodeOperationProtocol]] and the [[mechanics of the render process|RenderMechanics]], these details were left out deliberately.
|
||||
{{red{WIP as of 9/11 -- need to mention the planning phase more explicitly}}}
|
||||
|
||||
!Layered structure of State
|
||||
State can be seen as structured like an onion. All the [[StateAdapter]]s in one call stack are supposed to be within one layer: they all know of a "current state", which in turn is a StateProxy (and thus may refer to yet another state, perhaps across the network or in the backend or whatever). The actual {{{process()}}} function "within" the individual nodes just sees a single StateAdapter and thus can be thought to be a layer below.
|
||||
|
||||
!Buffer identification
|
||||
For the purpose of node operation, Buffers are identified by a [[buffer-handle|BuffHandle]], which contains both the actual buffer pointer and an internal index and classification of the source providing the buffer; the latter information is used for deallocation. Especially for calling the {{{process()}}} function (which is supposed to be plain C) the node invocation needs to prepare and provide an array containing just the output and input buffer pointers. Typically, this //frame pointer array//&nbsp; is allocated on the call stack.
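The preparation step described above can be sketched as follows. This is a hypothetical reconstruction, not the actual {{{BuffHandle}}} definition: the field names and the helper function are made up for illustration, and a {{{std::vector}}} stands in for what would typically be a stack-allocated array.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch: a buffer handle pairs the raw buffer pointer with
// an index classifying the providing source (needed later for deallocation).
// For the plain-C process() function, only the raw pointers are extracted
// into a contiguous "frame pointer array".
struct BuffHandle {
    void*       buffer;     // actual frame data
    std::size_t sourceIdx;  // classification of the provider (cache, pool, …)
};

// prepare the pointer array handed to the C-level process() function
std::vector<void*> buildFramePointerArray(std::vector<BuffHandle> const& handles)
{
    std::vector<void*> frames;
    frames.reserve(handles.size());
    for (auto const& h : handles)
        frames.push_back(h.buffer);   // strip the bookkeeping, keep the pointer
    return frames;
}
```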
|
||||
|
|
@ -6987,16 +6890,10 @@ Some data processors simply require to work on multiple channels simultanously,
|
|||
Closely related to this is the problem of how to number and identify nodes and thus to be able to find calculated frames in cache (&rarr; [[here|NodeFrameNumbering]])
|
||||
|
||||
!Configuration of the processing nodes
|
||||
[>img[uml/fig132357.png]]
|
||||
Every node is actually decomposed into three parts
|
||||
* an interface container of a ProcNode subclass
|
||||
* a {{{const}}} WiringDescriptor, which is actually parametrized to a subtype encoding details of how to carry out the intended operation
|
||||
* the Invocation state created on the stack for each {{{pull()}}} call. It is comprised of references to an StateAdapter object and the current overall process state, the WiringDescriptor, and finally a table of suitable buffer handles
|
||||
Thus, the outer container can be changed polymorphically to support the different kinds of nodes (large-scale view). The actual wiring of the nodes is contained in the WiringDescriptor, including the {{{process()}}} function pointer. Additionally, this WiringDescriptor knows the actual type of the operation Strategy, and this actual type has been chosen by the builder so as to select details of the desired operation of this node, for example caching / no caching or maybe ~OpenGL rendering or the special case of a node pulling directly from a source reader. Most of this configuration is done by selecting the right template specialisation within the builder; thus in the critical path most of the calls can be inlined.
|
||||
|
||||
!!!! composing the actual operation Strategy
|
||||
As shown in the class diagram to the right, the actual implementation is assembled by chaining together the various policy classes governing parts of the node operation, like Caching, in-Place calculation capability, etc. (&rarr; see [[here|WiringDescriptor]] for details). The rationale is that the variable part of the Invocation data is allocated at runtime directly on the stack, while a precisely tailored call sequence for "calculating the predecessor nodes" can be defined out of a bunch of simple building blocks. This helps avoiding "spaghetti code", which would be especially dangerous and difficult to get right because of the large number of different execution paths. Additionally, a nice side effect of this implementation technique is that a good deal of the implementation is eligible to inlining.
|
||||
We //do employ//&nbsp; some virtual calls for the buffer management in order to avoid coupling the policy classes to the actual number of in/out buffers. (As of 6/2008, this is mainly a precaution to be able to control the number of generated template instances. If we ever get in the region of several hundred individual specialisations, we'd need to separate out further variable parts to be invoked through virtual functions.)
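The policy-class composition described above can be illustrated with a minimal sketch. The policy names ({{{NoCaching}}}, {{{UseCache}}}) and the {{{Operation}}} template are hypothetical stand-ins, not the real WiringDescriptor machinery; the point is merely how selecting a template parameter at build time fixes a detail of node operation, with the call eligible for inlining.

```cpp
#include <cassert>
#include <string>

// Sketch: the operation strategy is assembled from policy classes chosen
// by the builder. Names are illustrative, not the actual Lumiera types.
struct NoCaching {
    static bool lookup(std::string&) { return false; }   // never a cache hit
};
struct UseCache {
    static bool lookup(std::string& out) { out = "cached"; return true; }
};

template<class CachePolicy>
struct Operation {                        // behaviour fixed at build time
    std::string invoke() {
        std::string result;
        if (CachePolicy::lookup(result))  // statically resolved, inlinable
            return result;
        return "computed";                // fall back to actual calculation
    }
};
```

Because the policy is a template parameter, there is one distinct instantiation per configuration and no runtime branching on "what kind of node am I" in the critical path.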
|
||||
* an actual Port, which is parametrized to a subtype encoding details related to the integration with a media handling Library
|
||||
* the Invocation state created on the stack for each {{{pull()}}} call. Largely, this entails data related to the actual invocation -- however, the input buffers are collected through recursive calls and can thus stay around for an extended time...
|
||||
|
||||
!Rules for buffer allocation and freeing
|
||||
* only output buffers are allocated. It is //never necessary//&nbsp; to allocate input buffers!
|
||||
|
|
@ -7010,13 +6907,13 @@ We //do employ//&nbsp; some virtual calls for the buffer management in order
|
|||
@@clear(right):display(block):@@
|
||||
</pre>
|
||||
</div>
|
||||
<div title="RenderJob" modifier="Ichthyostega" created="201202162156" modified="201305310017" tags="spec Rendering" changecount="1">
|
||||
<div title="RenderJob" modifier="Ichthyostega" created="201202162156" modified="202412220514" tags="spec Rendering" changecount="2">
|
||||
<pre>A unit of operation, to be [[scheduled|Scheduler]] for calculating media frame data just in time.
|
||||
Within each CalcStream, render jobs are produced by the associated FrameDispatcher, based on the corresponding JobTicket used as a blueprint (execution plan).
|
||||
|
||||
!Anatomy of a render job
|
||||
Basically, each render job is a //closure// -- hiding all the prepared, extended execution context and allowing the scheduler to trigger the job as a simple function.
|
||||
When activated, by virtue of this closure, the concrete ''node invocation'' is constructed, which is a private and safe execution environment for the actual frame data calculations. This (mostly stack based) environment embodies a StateProxy, acting as communication hub for accessing anything possibly stateful within the larger scope of the currently ongoing render process. The node invocation sequence is what actually implements the ''pulling of data'': on exit, all calculated data is expected to be available in the output buffers. Typically (but not necessarily) each node embodies a ''calculation function'', holding the actual data processing algorithm.
|
||||
When activated, by virtue of this closure, the concrete ''node invocation'' is constructed, which is a private and safe execution environment for the actual frame data calculations. The node invocation sequence is what actually implements the ''pulling of data'': on exit, all calculated data is expected to be available in the output buffers. Typically (but not necessarily) each node embodies a ''calculation function'', holding the actual data processing algorithm.
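The closure character of a render job can be sketched as follows. The types shown are hypothetical illustrations, not the actual Scheduler interface: the prepared execution context is bound into a {{{std::function}}}, so the scheduler can trigger the job without knowing anything about its contents.

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch (illustrative names): a render job is a closure hiding all the
// prepared execution context; the scheduler triggers it as a plain function.
struct RenderJob {
    long nominalTime;            // effective nominal frame time
    long invocationID;           // invocation instance ID
    std::function<void()> run;   // the closure to be triggered
};

struct Scheduler {
    std::vector<RenderJob> queue;
    void trigger() {             // the scheduler needs no further knowledge
        for (auto& job : queue)
            job.run();
    }
};
```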
|
||||
|
||||
!{{red{open questions 2/12}}}
|
||||
* what are the job's actual parameters?
|
||||
|
|
@ -7034,7 +6931,7 @@ Prerequisite data for the media calculations can be considered just part of that
|
|||
* all it needs to know is the ''effective nominal time'' and an ''invocation instance ID''
|
||||
</pre>
|
||||
</div>
|
||||
<div title="RenderMechanics" modifier="Ichthyostega" created="200806030230" modified="202407030217" tags="Rendering operational impl img" changecount="1">
|
||||
<div title="RenderMechanics" modifier="Ichthyostega" created="200806030230" modified="202412220529" tags="Rendering operational" changecount="4">
|
||||
<pre>{{red{⚠ In-depth rework underway as of 7/2024...}}}
|
||||
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
|
||||
While the render process, with respect to the dependencies, the builder and the processing function is sufficiently characterized by referring to the ''pull principle'' and by defining a [[protocol|NodeOperationProtocol]] each node has to adhere to &mdash; for actually get it coded we have to care for some important details, especially //how to manage the buffers.// It may well be that the length of the code path necessary to invoke the individual processing functions is finally not so important, compared with the time spent at the inner pixel loop within these functions. But my guess is (as of 5/08), that the overall number of data moving and copying operations //will be//&nbsp; of importance.
|
||||
|
|
@ -7048,40 +6945,20 @@ While the render process, with respect to the dependencies, the builder and the
|
|||
On the other hand, the processing function within the individual node needs to be shielded from these complexities. It can expect to get just //N~~I~~// input buffers and //N~~O~~// output buffers of required type. And, moreover, as the decision how to organize the buffers certainly depends on non-local circumstances, it should be preconfigured while building.
|
||||
|
||||
!data flow
|
||||
[>img[uml/fig131973.png]]
|
||||
Not everything can be preconfigured though. The pull principle opens the possibility for the node to decide on a per-call basis what predecessor(s) to pull (if any). This decision may rely on automation parameters, which thus need to be accessible prior to requesting the buffer(s). Additionally, in a later version we plan to have the node network calculate some control values for adjusting the cache and backend timings &mdash; and of course at some point we'll want to utilize the GPU, resulting in the need to feed data from our processing buffers into some texture representation.
|
||||
|
||||
!buffer management
|
||||
{{red{NOTE 9/11: the following is partially obsolete and needs to be rewritten}}} &rarr; see the BufferTable for details regarding new buffer management...
|
||||
|
||||
Besides the StateProxy representing the actual render process and holding a couple of buffer (refs), we employ a lightweight adapter object in between. It is used //for a single {{{pull()}}}-call// &mdash; mapping the actual buffers to the input and output port numbers of the processing node and for dealing with the cache calls. While the StateProxy manages a pool of frame buffers, this interspersed adapter allows us to either use a buffer retrieved from the cache as an input, possibly use a new buffer located within the cache as output, or (in case no caching happens) to just use the same buffer as input and output for "in-place"-processing. The idea is that most of the configuration of this adapter object is prepared in the wiring step while building the node network.
|
||||
...
|
||||
|
||||
The usage pattern of the buffers can be stack-like when processing nodes require multiple input buffers. In the standard case, which also is the simplest case, a pair of buffers (or a single buffer for "in-place" capable nodes) suffices to calculate a whole chain of nodes. But &mdash; as the recursive descent means depth-first processing &mdash; in case multiple input buffers are needed, we may encounter a situation where some of these input buffers already contain processed data, while we have to descend into yet another predecessor node chain to pull the data for the remaining buffers. Care has to be taken //to allocate the buffers as late as possible,// otherwise we could end up holding onto a buffer almost for each node in the network. Effectively this translates into the rule to allocate output buffers only after all input buffers are ready and filled with data; thus we shouldn't allocate buffers when //entering// the recursive call to the predecessor(s), rather we have to wait until we are about to return from the downcall chain.
|
||||
Besides, these considerations also show we need a means of passing on the current buffer usage pattern while calling down. This usage pattern not only includes a record of what buffers are occupied, but also the intended use of these occupied buffers, especially if they can be modified in-place, and at which point they may be released and reused.
|
||||
__note__: this process outlined here and below is still a simplification. The actual implementation has some additional [[details to care for|RenderImplDetails]]
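The late-allocation rule can be demonstrated with a toy model. All names here are hypothetical; the {{{BufferPool}}} merely tracks how many buffers are live at once, showing that allocating the output //after// the recursive descent keeps the peak usage bounded even for a longer chain.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Toy stand-in tracking peak buffer usage (illustrative, not the real pool).
struct BufferPool {
    int live = 0, peak = 0;
    int  allocate() { ++live; peak = std::max(peak, live); return live; }
    void release()  { --live; }
};

struct ChainNode {
    std::vector<ChainNode*> leads;

    int pull(BufferPool& pool) {
        for (ChainNode* p : leads)          // depth-first: fill all inputs first
            p->pull(pool);
        int out = pool.allocate();          // claim the output as late as possible
        for (std::size_t i = 0; i < leads.size(); ++i)
            pool.release();                 // inputs may be released again
        return out;
    }
};
```

For a simple chain, at most two buffers are ever live simultaneously, no matter how many nodes the chain has; allocating on //entering// the recursion would instead hold one buffer per node.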
|
||||
|
||||
!!Example: calculating a 3 node chain
|
||||
# Caller invokes calculation by pulling from exit node, providing the top-level StateProxy
|
||||
# node1 (exit node) builds StateAdapter and calls retrieve() on it to get the desired output result
|
||||
# this StateAdapter (ad1) knows it could get the result from the Cache, so it tries, but it's a miss
|
||||
# thus he pulls from the predecessor node2 according to the [[input descriptor|ProcNodeInputDescriptor]] of node1
|
||||
# node2 builds its StateAdapter and calls retrieve()
|
||||
# but because StateAdapter (ad2) is configured to directly forward the call down (no caching), it pulls from node3
|
||||
# node3 builds its StateAdapter and calls retrieve()
|
||||
# this StateAdapter (ad3) is configured to look into the Cache...
|
||||
# this time producing a Cache hit
|
||||
# now StateAdapter ad2 has input data, but needs an output buffer location, which it requests from its //parent state// (ad1)
|
||||
# and, because ad1 is configured for Caching and is "in-place" capable, it's clear that this output buffer will be located within the cache
|
||||
# thus the allocation request is forwarded to the cache, which provides a new "slot"
|
||||
# now node2 has both a valid input and a usable output buffer, thus the process function can be invoked
|
||||
# and after the result has been rendered into the output buffer, the input is no longer needed
|
||||
# and can be "unlocked" in the Cache
|
||||
# now the input data for node1 is available, and as node1 is in-place-capable, no further buffer allocation is necessary prior to calculating
|
||||
# the finished result is now in the buffer (which happens to be also the input buffer and is actually located within the Cache)
|
||||
# thus it can be marked as ready for the Cache, which may now provide it to other processes (but isn't allowed to overwrite it)
|
||||
# finally, when the caller is done with the data, it signals this to the top-level State object
|
||||
# which forwards this information to the cache, which in turn may now do with the released Buffer as it sees fit.
|
||||
[img[uml/fig132229.png]]
|
||||
{{red{TODO 12/2024}}} still relevant to discuss such an example in-detail? what would be the point to drive home??
|
||||
|
||||
|
||||
@@clear(right):display(block):@@
|
||||
__see also__
|
||||
|
|
@ -7178,7 +7055,7 @@ An Activity is //performed// by invoking its {{{activate(now, ctx)}}} function -
|
|||
In a similar vein, ''dependency notifications'' also need to happen decoupled from the activity chain from which they originate; thus the Post-mechanism is also used for dispatching notifications. Yet notifications are to be treated specially, since they are directed towards a receiver, which in the standard case is a {{{GATE}}}-Activity and will respond by //decrementing its internal latch.// Consequently, notifications will be sent through the ''λ-post'' -- which operationally re-schedules a continuation as a follow-up job. Receiving such a notification may cause the Gate to open; in this case the trigger leads to //activation of the chain// hooked behind the Gate, which at some point typically enters into another calculation job. Otherwise, if the latch (in the Gate) is already zero (or the deadline has passed), nothing happens. Thus the implementation of state transition logic ensures the chain behind a Gate can only be //activated once.//
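The latch logic of such a {{{GATE}}}-Activity can be sketched in isolation. This is an illustrative reduction, ignoring deadlines and the actual Activity representation; it only demonstrates the state-transition property claimed above, namely that the chain behind the Gate fires exactly once, when the last outstanding prerequisite is notified.

```cpp
#include <cassert>

// Sketch of a GATE: each notification decrements the internal latch;
// when the latch reaches zero, the chain behind the Gate is activated,
// and the state transition ensures this can happen only once.
struct Gate {
    int  latch;              // number of prerequisites still outstanding
    bool activated  = false;
    int  activations = 0;

    void notify() {
        if (latch > 0 && --latch == 0 && !activated) {
            activated = true;
            ++activations;   // activate the chain hooked behind the Gate
        }
        // latch already zero: nothing happens
    }
};
```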
|
||||
</pre>
|
||||
</div>
|
||||
<div title="RenderProcess" modifier="Ichthyostega" created="200706190705" modified="202411010046" tags="Rendering operational" changecount="20">
|
||||
<div title="RenderProcess" modifier="Ichthyostega" created="200706190705" modified="202412220509" tags="Rendering operational" changecount="21">
|
||||
<pre>At a high level, the Render Process is what „runs“ a playback or render. Using the EngineFaçade, the [[Player]] creates a descriptor for such a process, which notably defines a [[»calculation stream«|CalcStream]] for each individual //data feed// to be produced. To actually implement such an //ongoing stream of timed calculations,// a series of data frames must be produced, for which some source data has to be loaded and then individual calculations will be scheduled to work on this data and deliver results within a well defined time window for each frame. Thus, on the implementation level, a {{{CalcStream}}} comprises a pipeline to define [[render jobs|RenderJob]], and a self-repeating re-scheduling mechanism to repeatedly plan and dispatch a //chunk of render jobs// to the [[Scheduler]], which takes care to invoke the individual jobs in due time.
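The chunk-wise, self-repeating planning can be sketched as a simple loop driver. Everything here is illustrative (frame numbers as plain integers, a fixed chunk size, no actual scheduling); the point is only the shape of the mechanism: each planning step emits one chunk of jobs and leaves the stream positioned for the next step, until the stream is exhausted.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch of chunk-wise job planning within a CalcStream.
struct CalcStream {
    long nextFrame = 0;
    long lastFrame;                       // final frame to deliver
    static constexpr long CHUNK = 5;      // planning chunk size (made up)

    // plan one chunk of render jobs; returns the planned frame numbers
    std::vector<long> planChunk() {
        std::vector<long> jobs;
        while (nextFrame <= lastFrame && jobs.size() < std::size_t(CHUNK))
            jobs.push_back(nextFrame++);  // one job per frame
        return jobs;                      // then re-schedule the planner itself
    }

    bool done() const { return nextFrame > lastFrame; }
};
```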
|
||||
|
||||
This leads to an even more detailed description at implementation level of the ''render processing''. Within the [[Session]], the user has defined the »edit« or the definition of the media product as a collection of media elements placed and arranged into a [[Timeline]]. A repeatedly-running, demand-driven, compiler-like process (in Lumiera known as [[the Builder|Builder]]) consolidates this [[»high-level definition«|HighLevelModel]] into a [[Fixture]] and a [[network of Render Nodes|LowLevelModel]] directly attached below. The Fixture hereby defines a [[segment for each part of the timeline|Segmentation]], which can be represented as a distinct and non-changing topology of connected render nodes. So each segment spans a time range, quantised into a range of frames -- and the node network attached below this segment is capable of producing media data for each frame within the definition range, when given the actual frame number, and some designation of the actual data feed required at that point. Yet it depends on the circumstances what this »data feed« //actually is;// as a rule, anything which can be produced and consumed as a compound will be represented as //a single feed.// The typical video will thus comprise a video feed and a stereo sound feed, while another setup may require delivering individual sound feeds for the left and right channel, or whatever channel layout the sound system has, and it may require two distinct projector feeds for the two channels of stereoscopic video. However -- as a general rule of architecture -- the Lumiera Render Engine is tasked to perform //all of the processing work,// up to and including any adaptation step required to reach the desired final result. Thus, for rendering into a media container, only a single feed is required, which can be drawn from an encoder node, which in turn consumes several data feeds for its constituents.
|
||||
|
|
@ -7192,8 +7069,6 @@ To summarise this break-down of the rendering process defined thus far, the [[Sc
|
|||
{{red{⚠ In-depth rework underway as of 7/2024...}}}
|
||||
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
|
||||
For each segment (of the effective timeline), there is a Processor holding the exit node(s) of a processing network, which is a "Directed Acyclic Graph" of small, preconfigured, stateless [[processing nodes|ProcNode]]. This network is operated according to the ''pull principle'', meaning that the rendering is just initiated by "pulling" output from the exit node, causing a cascade of recursive downcalls or prerequisite calculations to be scheduled as individual [[jobs|RenderJob]]. Each node knows its predecessor(s), thus the necessary input can be pulled from there. Consequently, there is no centralized "engine object" which may invoke nodes iteratively or table driven &mdash; rather, the rendering can be seen as a passive service provided for the Vault, which may pull from the exit nodes at any time, in any order (?), and possibly multithreaded.
|
||||
All State necessary for a given calculation process is encapsulated and accessible by a StateProxy object, which can be seen as the representation of "the process". At the same time, this proxy provides the buffers holding data to be processed and acts as a gateway to the Vault to handle the communication with the Cache. In addition to this //top-level State,// each calculation step includes a small [[state adapter object|StateAdapter]] (stack allocated), which is pre-configured by the builder and serves the purpose to isolate the processing function from the details of buffer management.
|
||||
|
||||
|
||||
__see also__
|
||||
&rarr; the [[Entities involved in Rendering|RenderEntities]]
|
||||
|
|
@ -8185,20 +8060,6 @@ Shutdown is initiated by sending a message to the dispatcher loop. This causes t
|
|||
<div title="SplashScreen" modifier="just me" created="200706220430">
|
||||
<pre>{{red{killme}}}</pre>
|
||||
</div>
|
||||
<div title="StateAdapter" modifier="Ichthyostega" created="200806261912" modified="202405062148" tags="Rendering impl def" changecount="2">
|
||||
<pre>A small (in terms of storage) and specifically configured StateProxy object which is created on the stack {{red{Really on the stack? 9/11}}} for each individual {{{pull()}}} call. It is part of the invocation state of such a call and participates in the buffer management. Thus, in a calldown sequence of {{{pull()}}} calls we get a corresponding sequence of "parent" states. At each level, the &rarr; WiringDescriptor of the respective node defines a Strategy how the call is passed on.
|
||||
|
||||
{{red{WIP 4/2024: the interface will be called {{{StateClosure}}} -- not sure to what degree there will be different implementations ...
|
||||
right now about to work towards a first integration of render node invocation}}}</pre>
|
||||
</div>
|
||||
<div title="StateProxy" modifier="Ichthyostega" created="200706220352" modified="201107082334" tags="def">
|
||||
<pre>An Object representing a //Render Process// and containing associated state information.
|
||||
* it is created in the Player subsystem while initiating the RenderProcess
|
||||
* it is passed on to the generated Render Engine, which in turn passes it down to the individual Processors
|
||||
* moreover, it contains methods to communicate with other state relevant parts of the system, thereby shielding the rendering code from any complexities of state synchronisation and management if necessary. (thus the name Proxy)
|
||||
* in a future version, it may also encapsulate the communication in a distributed render farm
|
||||
</pre>
|
||||
</div>
|
||||
<div title="Steam-Layer" creator="Ichthyostega" modifier="Ichthyostega" created="201812092252" modified="201812092305" tags="def" changecount="3">
|
||||
<pre>The architecture of the Lumiera application separates functionality into three Layers: __Stage__, __Steam__ and __Vault__.
|
||||
|
||||
|
|
@ -8440,7 +8301,7 @@ When deciding if a connection can be made, we can build up the type information
|
|||
My Idea was to use [[type implementation constraints|StreamTypeImplConstraint]] for this, which are a special kind of ~ImplType
|
||||
</pre>
|
||||
</div>
|
||||
<div title="StrongSeparation" modifier="Ichthyostega" created="200706220452" modified="202304140027" tags="design" changecount="1">
|
||||
<div title="StrongSeparation" modifier="Ichthyostega" created="200706220452" modified="202412220508" tags="design" changecount="2">
|
||||
<pre>This design lays great emphasis on separating all those components and subsystems, which are considered not to have a //natural link// of their underlying concepts. This often means putting some additional constraints on the implementation, so basically we need to rely on the actual implementation to live up to this goal. In many cases it may seem to be more natural to "just access the necessary information". But in the long run this coupling of not-directly related components makes the whole codebase monolithic and introduces lots of //accidental complexity.//
|
||||
|
||||
Instead, we should try to connect the various subsystems solely via Interfaces and &mdash; rather than just using some information directly &mdash; use a service located on an Interface to query other components for this information. The best approach, of course, is always to avoid the dependency altogether.
|
||||
|
|
@ -8450,7 +8311,6 @@ Instead, we should try to just connect the various subsystems via Interfaces and
|
|||
* same holds true for the Builder: it just uses the same Interfaces. The actual coupling is done rather //by type//, i.e. the Builder relies on an arrangement of MObjects to exist and picks up their properties through a small number of generic overloaded methods -- the session is interpreted and translated.
|
||||
* the Builder itself is a separation layer. Neither do the Objects in the session access directly [[Render Nodes|ProcNode]], nor do the latter call back into the session. Both connections seem to be necessary at first sight, but both can be avoided by using the Builder Pattern
|
||||
* another separation exists between the Render Engine and the individual Nodes: The Render Engine doesn't need to know the details of the data types processed by the Nodes. It relies on the Builder having done the correct connections and just pulls out the calculated results. If additional control information needs to be passed, I would prefer to do a direct wiring of separate control connections to specialized components, which in turn could instruct the controller to change the rendering process.
|
||||
* to shield the rendering code of all complexities of thread communication and synchronization, we use the StateProxy
|
||||
</pre>
|
||||
</div>
|
||||
<div title="StructAsset" modifier="Ichthyostega" created="200709221353" modified="201505310120" tags="def classes img" changecount="5">
|
||||
|
|
@ -11154,7 +11014,7 @@ generally speaking, visitors are preferable when the underlying element type hie
|
|||
To see a simple example of our "visiting tool", have a look at {{{tests/components/common/visitingtooltest.cpp}}}
</pre>
</div>
<div title="WalkThrough" modifier="Ichthyostega" created="200706210625" modified="200805300124" tags="overview">
<div title="WalkThrough" modifier="Ichthyostega" created="200706210625" modified="202412220507" tags="overview" changecount="2">
<pre>The Intention of this text is to help you understand the design and to show some notable details.
!!!!Starting Point
@ -11176,7 +11036,6 @@ This design strives to build each level and subsystem around some central concep
* on the lower end of the builder, everything is organized around the concept of a ProcNode, which enables us to //pull// one (freely addressable) Frame of calculated data. Further, the ProcNode has the ability to be wired up with other nodes and [[Parameter Providers|ParamProvider]]
* the various types of data to be processed are abstracted away under the notion of a [[Frame]]. Basically, a Frame is a Buffer containing an array of raw data; it can be located by some generic scheme, including (at least) the absolute starting time (and probably some type or channel id).
* all sorts of (target domain) [[parameters|Parameter]] are treated uniformly. There is a distinction between Parameters (which //could// be variable) and Configuration (which is considered to be fixed). In this context, [[Automation]] just appears as a special kind of ParamProvider.
* and finally, the calculation //process// together with its current state is represented by a StateProxy. I call this a "proxy" because it should encapsulate and hide all tedious details of communication, be it even asynchronous communication with some Controller or Dispatcher running in another Thread. In order to maintain a view on the current state of the render process, it could possibly become necessary to register as an observer somewhere or to send notifications to other parts of the system.
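The Frame notion from the list above could be sketched as follows; the concrete fields (time, channel) are assumptions for the sake of illustration, not the actual data layout:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative sketch only: a "Frame" as a raw buffer plus a generic
// locating key. Field names are assumptions, not Lumiera's real layout.
struct FrameKey
  {
    int64_t  time;      // absolute starting time (e.g. in microseconds)
    uint32_t channel;   // type or channel id

    bool operator== (FrameKey const& o)  const
    { return time == o.time && channel == o.channel; }
  };

struct Frame
  {
    FrameKey key;                   // generic locating scheme
    std::vector<uint8_t> buffer;    // array of raw data
  };
```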
!!!!Handling Diversity
An important goal of this approach is to be able to push down the treatment of variations and special cases. We don't need to know what kind of Placement links one MObject to another, because it is sufficient for us to get an ExplicitPlacement. The Render Engine doesn't need to know whether it is pulling audio Frames or video Frames or GOPs or OpenGL textures. It simply relies on the Builder wiring together the correct node types. And the Builder in turn does so by using some overloaded function of an iterator or visitor. In many instances, instead of making decisions in-code or using hard-wired defaults, a system of [[configuration rules|ConfigRules]] is invoked to get a suitable default as a solution (and, as a plus, this provides points of customisation for advanced users). At engine level, there is no need for the video processing node to test for the colormodel on every screen line, because the Builder has already wired up the fitting implementation routine. All of this helps reduce complexity, and quite a few misconceptions can already be detected by the compiler.
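The colormodel remark can be illustrated with a minimal sketch: the routine is selected once at build time, so the per-line hot path contains no type test. All names here are invented for the example, not taken from the actual code base.

```cpp
#include <cassert>

// "Decide once in the builder, not per scan line": the builder selects the
// routine fitting the concrete colormodel up front.
using LineOp = void (*)(unsigned char* line, int pixels);

void invertRGB  (unsigned char* line, int n) { for (int i=0; i < 3*n; ++i) line[i] = 255 - line[i]; }
void invertGray (unsigned char* line, int n) { for (int i=0; i < n;   ++i) line[i] = 255 - line[i]; }

enum class ColorModel { RGB, GRAY };

// invoked once, while building the node network
inline LineOp
selectRoutine (ColorModel cm)
{
  return cm == ColorModel::RGB ? invertRGB : invertGray;
}
```

After building, the per-line loop just calls the selected function pointer; the colormodel decision has been lifted out of the hot path entirely.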
@ -11185,10 +11044,10 @@ An important goal of this approach is to be able to push down the treatment of v
In case it's not already clear: we don't have "the" Render Engine; rather, we construct a Render Engine for each structurally differing part of the timeline. (Please relate this to the current Cinelerra code base, which constructs and builds up the render pipeline for each frame separately.) There is no need to call back from within the pipeline to find out if a given plugin is enabled, or to see if there are any automation keyframes. We don't need to pose any constraints on the structuring of the objects in the session, besides the requirement to get an ExplicitPlacement for each. We could even loosen the use of the common metaphor of placing media sequences on fixed tracks, if we want to arrive at a more advanced GUI at some point in the future.
!!!!Stateless Subsystems
The &raquo;current setup&laquo; of the objects in the session is sort of a global state. The same holds true for the Controller, as the Engine can be at playback, run a background render or scrub single frames. But the whole complicated subsystem of the Builder and one given Render Engine configuration can be made ''stateless''. As a benefit, we can run these subsystems multi-threaded without the need for any precautions (locking, synchronizing), because all state information is passed in as function parameters and lives in local variables on the stack, or is contained in the StateProxy, which represents the given render //process// and is passed down as a function parameter as well. (Note: I use the term "stateless" in the usual, slightly relaxed manner; of course there are some configuration values contained in instance variables of the objects carrying out the calculations, but these values are considered to be constant over the course of the object usage.)
The &raquo;current setup&laquo; of the objects in the session is sort of a global state. The same holds true for the Controller, as the Engine can be at playback, run a background render or scrub single frames. But the whole complicated subsystem of the Builder and one given Render Engine configuration can be made ''stateless''. As a benefit, we can run these subsystems multi-threaded without the need for any precautions (locking, synchronizing), because all state information is passed in as function parameters and lives in local variables on the stack. (Note: I use the term "stateless" in the usual, slightly relaxed manner; of course there are some configuration values contained in instance variables of the objects carrying out the calculations, but these values are considered to be constant over the course of the object usage.)
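A minimal sketch of this convention, with an invented {{{ProcessState}}} standing in for the StateProxy: fixed configuration is {{{const}}}, while everything varying is passed in as a parameter, so concurrent invocations need no locking.

```cpp
#include <cassert>

// Illustrative stand-in for the per-process state (not Lumiera's real type)
struct ProcessState
  {
    long frameNr;    // which frame this invocation calculates
  };

struct GainNode
  {
    double const gain;   // fixed configuration, constant over object lifetime

    double
    render (ProcessState const&, double input)  const
    {
      return gain * input;   // reads only const config and arguments -- thread-safe
    }
  };
```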
</pre>
</div>
<div title="Wiring" modifier="Ichthyostega" created="201009250223" modified="201201282126" tags="Concepts Model design draft">
<div title="Wiring" modifier="Ichthyostega" created="201009250223" modified="202412220500" tags="Concepts Model design draft" changecount="1">
<pre>Within Lumiera's ~Steam-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring deals with how to define and control these connections, how they are to be detected by the builder and finally implemented by links in the render engine.
&rarr; see OutputManagement
&rarr; see OutputDesignation
@ -11209,7 +11068,7 @@ The Processing of such a wiring request drives the actual connection step. It is
* the processing pattern is queried to fit the mould, the stream type and additional placement specifications ({{red{TODO 11/10: work out details}}})
* the stream type system itself contributes to determining possible connections and conversions, introducing further processing patterns
The final result, within the ''Render Engine'', is a network of processing nodes. Each of these nodes holds a WiringDescriptor, created as a result of the wiring operation detailed above. This descriptor lists the predecessors and (in somewhat encoded form) the other details necessary for the processing node to respond properly to the engine's calculation requests (read: those details are implementation bound and can be expected to be made to fit)
The final result, within the ''Render Engine'', is a network of processing nodes. Each of these nodes holds a ... {{red{TODO 12/24: almost all names and structures have been reworked...}}}....
On a more global level, this LowLevelModel within the engine exposes a number of [[exit nodes|ExitNode]], each corresponding to a ModelPort, thus being a possible source to be handled by the OutputManager, which is responsible for mapping and connecting nominal outputs (the model ports) to actual output sinks (external connections and viewer windows). A model port isn't necessarily an absolute endpoint of connected processing nodes &mdash; it may as well reside in the middle of the network, e.g. as a ProbePoint. Besides the core engine network, there is also an [[output network|OutputNetwork]], built and extended on demand to prepare generated data for presentation. This ViewConnection might necessitate scaling or interpolating video for a viewer, adding overlays with control information produced by plugins, or rendering and downmixing multichannel sound. By employing this output network, the same techniques used to control the wiring of the main path can be extended to control this output preparation step. ({{red{WIP 11/10}}} some important details are still to be settled here, like how to control semi-automatic adaptation steps. But that is partially true for the main network as well: for example, we don't know where to locate and control the faders generated as a consequence of building a summation line)
@ -11226,35 +11085,6 @@ On a more global level, this LowLevelModel within the engine exposes a number of
!Control connections
</pre>
</div>
<div title="WiringDescriptor" modifier="Ichthyostega" created="200807132338" modified="202407030217" tags="Rendering operational impl spec" changecount="1">
<pre>{{red{⚠ In-depth rework underway as of 7/2024...}}}
^^┅┅┅┅┅┅the following text is ''superseded''┅┅┅┅┅┅┅┅┅^^
Each [[processing node|ProcNode]] contains a stateless ({{{const}}}) descriptor detailing the inputs, outputs and predecessors. Moreover, this descriptor contains the configuration of the call sequence yielding the &raquo;data pulled from predecessor(s)&laquo;. The actual type of this object is composed out of several building blocks (policy classes) and placed by the builder as a template parameter on the WiringDescriptor of the individual ProcNode. This happens in the WiringFactory in file {{{nodewiring.cpp}}}, which consequently contains all the possible combinations (pre)generated at compile time.
!building blocks
* ''Caching'': whether the result frames of this processing step will be communicated to the Cache and thus could be fetched from there instead of actually calculating them.
* ''Process'': whether this node does any calculations on its own or just pulls from a source
* ''Inplace'': whether this node is capable of processing the result "in-place", thereby overwriting the input buffer
* ''Multiout'': whether this node produces multiple output channels/frames in one processing step
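The composition of these building blocks into a concrete descriptor type could look roughly like this; the code is a simplified illustration of policy-class composition, not the actual {{{nodewiring.cpp}}} implementation:

```cpp
#include <cassert>

// Policy classes mirroring the building blocks listed above (illustrative)
struct Caching    { static constexpr bool useCache   = true;  };
struct NoCaching  { static constexpr bool useCache   = false; };
struct Inplace    { static constexpr bool canInplace = true;  };
struct NoInplace  { static constexpr bool canInplace = false; };

template<class CACHE, class PLACE>
struct WiringDescriptor
  : CACHE
  , PLACE
  {
    // caching interferes with in-place calculation:
    // a frame residing in the cache must not be overwritten
    static constexpr bool allowInplace = PLACE::canInplace and not CACHE::useCache;
  };

using CachedNode = WiringDescriptor<Caching,   Inplace>;
using PlainNode  = WiringDescriptor<NoCaching, Inplace>;

static_assert (not CachedNode::allowInplace, "cached frames must not be modified");
static_assert (PlainNode::allowInplace,      "in-place chain needs no allocation");
```

Since the flags are compile-time constants, every valid combination becomes a distinct type, which matches the idea of (pre)generating all combinations at compile time.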
!!implementation
!!!!Caching
When a node participates in ''caching'', a result frame may be pulled immediately from cache instead of calculating it. Moreover, //any output buffer//&nbsp; of this node will be allocated //within the cache.// Consequently, caching interferes with the ability of the next node to calculate "in-Place". In the other case, when ''not using the cache'', the {{{pull()}}} call immediately starts out by calling down to the predecessor nodes, and the allocation of output buffer(s) is always delegated to the parent state (i.e. the StateProxy pulling results from this node).

Generally, buffer allocation requests from predecessor nodes (while being pulled by this node) will either be satisfied by using the "current state", or treated as if they were our own output buffers when this node is in-Place capable.
!!!!Multiple Outputs
Some simplifications are possible in the default case of a node producing just ''one single output'' stream. Otherwise, we'd have to allocate multiple output buffers, and then, after processing, select the one needed as a result and deallocate the superfluous further buffers.
!!!!in-Place capability
If a node is capable of calculating the result by ''modifying its input'' buffer(s), an important performance optimization is possible, because in a chain of in-place capable nodes we don't need any buffer allocations. But, on the other hand, this optimization may collide with caching, because a frame retrieved from cache must not be modified.
Without this optimization, in the base case each processing step needs an input and an output. Exceptionally, we could think of special nodes which //require// processing in-place, in which case we'd need to provide them with a copy of the input buffer to work on.
!!!!Processing
If ''not processing'', we don't have any input buffers; instead we get our output buffers from an external source.
Otherwise, in the default case of actually ''processing'' our output, we have to organize input buffers, allocate output buffers, call the {{{process()}}} function of the WiringDescriptor and finally release the input buffers.
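The call sequence for the processing case can be sketched as follows, with buffers reduced to plain vectors and {{{process()}}} to a function pointer; this illustrates only the sequence, not the real buffer management:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Buffer = std::vector<float>;

// Rough sketch of pull() for a *processing* node (simplified assumption)
inline Buffer
pull (std::vector<Buffer> const& inputs, float (*process)(float))
{
  // (1) organise input buffers -- here they are simply handed in
  // (2) allocate the output buffer
  Buffer out (inputs.at(0).size());
  // (3) invoke the process() function configured in the WiringDescriptor
  for (std::size_t i = 0; i < out.size(); ++i)
      out[i] = process (inputs[0][i]);
  // (4) input buffers are released on leaving scope (RAII in this sketch)
  return out;
}
```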
</pre>
</div>
<div title="WiringRequest" modifier="Ichthyostega" created="200810060344" modified="200810060404" tags="def spec">
<pre>{{red{This is an early draft as of 9/08}}}
Wiring requests rather belong to the realm of the high-level model, but play an important role in the build process, because the result of "executing" a wiring request will be to establish an actual low-level data connection. Wiring requests will be created automatically in the course of the build process, but they can also be created manually and attached to the media objects in the high-level model as a ''wiring plug'', which is a special kind of LocatingPin (&rarr; [[Placement]])
@ -28023,9 +28023,7 @@
|
|||
</node>
|
||||
<node CREATED="1677626302701" ID="ID_940600114" MODIFIED="1677626530910">
|
||||
<richcontent TYPE="NODE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
<u><font color="#8323d6">Fall-1</font></u>: Font-Size relativ (in Punkten) ⟹
|
||||
|
|
@ -28707,9 +28705,7 @@
|
|||
<node CREATED="1555947359783" FOLDED="true" ID="ID_1366291443" MODIFIED="1561827469137" TEXT="noch schlimmer...">
|
||||
<node CREATED="1555947367949" ID="ID_1145455475" MODIFIED="1557498707228" TEXT="auch die Textbook-Impl von std::apply funktioniert genauso">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
d.h. std::get<idx> (std::forward<TUP> (tuple))
|
||||
|
|
@ -29154,9 +29150,7 @@
|
|||
</node>
|
||||
<node BACKGROUND_COLOR="#ccb59b" COLOR="#6e2a38" CREATED="1557590616683" ID="ID_128501904" MODIFIED="1576282358062" TEXT="ich halte es jetzt für gelungen">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
weil ich es geschafft habe,
|
||||
|
|
@ -29793,9 +29787,7 @@
|
|||
<node CREATED="1674172548813" ID="ID_53484260" MODIFIED="1674172558839" TEXT="Padding für die Content-Höhe genügt nicht"/>
|
||||
<node CREATED="1674345881143" ID="ID_1661063602" MODIFIED="1674345967809" TEXT="TrackBody kann das aber berechnen">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
...und zwar über den contentOffset, der relativ zur Start-Zeile gemessen wird, sowie der direkten ContentHeight, zuzüglich Padding
|
||||
|
|
@ -30048,9 +30040,7 @@
|
|||
<node CREATED="1566421664831" ID="ID_1051378925" MODIFIED="1566421671379" TEXT="exakt das Gleiche"/>
|
||||
<node CREATED="1566421672116" ID="ID_343684790" MODIFIED="1576282358057" TEXT="nur noch zusätzlich ein Window->win->invalidate_rect">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
allerdings nur, nachdem man das drawing ein/ausgeschaltet hat....
|
||||
|
|
@ -31694,9 +31684,7 @@
|
|||
</node>
|
||||
<node CREATED="1563118023721" ID="ID_189142395" MODIFIED="1563141663430" TEXT="hatte da vor 1 Monat schon mal darüber nachgedacht">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
und seinerzeit beschlossen, es vorerst im prelude() zu belassen...
|
||||
|
|
@ -31863,9 +31851,7 @@
|
|||
</node>
|
||||
<node CREATED="1561737452937" ID="ID_699098078" MODIFIED="1576282358046">
|
||||
<richcontent TYPE="NODE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
speziell die Antwort von <b>ebassi</b> beachten...
|
||||
|
|
@ -32529,9 +32515,7 @@
|
|||
<node CREATED="1564838892422" ID="ID_1063402707" MODIFIED="1564838910760" TEXT="Expliziter Selektor wiegt mehr als generischer Selektor"/>
|
||||
<node CREATED="1564838932673" ID="ID_1845439133" MODIFIED="1564839843544" TEXT="Einsicht nebenbei: sehe schon, warum die Designer immer CSS vergewaltigen">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
Die Idee mit dem Cascading funktioniert nämlich nur, wenn man die Oberfläche <i>explizit</i> und <i>bewußt</i>
|
||||
|
|
@ -33388,9 +33372,7 @@
|
|||
</node>
|
||||
<node CREATED="1611486170153" ID="ID_137005272" MODIFIED="1611487171442" TEXT="tatsächlich ist DisplayFrame der CanvasHook">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
...und zwar für den einzigen relevanten Canvas, das ist nämlich der untere, in der ScrolledPane, mit dem Track-Content. Wir verwenden inzwischen in jedem Canvas nur noch die lokalen Koordinaten, und daher addieren nun die jeweiligen TrackBody auch ihre eigene startLine_ in lokalen Koordinaten auf. Da der DisplayFrame direkten Zugang zu "seinem" zugehörigen TrackBody hat, bekommen wir über diesen Trick stets punktgenaue, lokale Koordinaten, solange wir uns im Geltungsbereich dieses TrackBody aufhalten. Das bedeutet, theoretisch könnte ein Clip auch weit unterhalb des TrackBody angeheftet werden. So etwas muß dann eigens im DisplayEvaluationPass ausgeschlossen werden
|
||||
|
|
@ -33865,9 +33847,7 @@
|
|||
</body>
|
||||
</html></richcontent>
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head>
|
||||
|
||||
</head>
|
||||
<head/>
|
||||
<body>
|
||||
<p>
|
||||
...und muß damit in den State-Change-Mechanismus für den Präsentationsstil verlegt werden
|
||||
|
|
@ -92266,7 +92246,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
<icon BUILTIN="button_ok"/>
|
||||
</node>
|
||||
</node>
|
||||
<node BACKGROUND_COLOR="#c8c0b6" COLOR="#435e98" CREATED="1734282698342" ID="ID_1166367446" MODIFIED="1734738888812" TEXT="Aufgabe: Konstruktor-Parameter hängen von der Konfiguration ab">
|
||||
<node BACKGROUND_COLOR="#c8c0b6" COLOR="#435e98" CREATED="1734282698342" FOLDED="true" ID="ID_1166367446" MODIFIED="1734831323276" TEXT="Aufgabe: Konstruktor-Parameter hängen von der Konfiguration ab">
|
||||
<linktarget COLOR="#2143ae" DESTINATION="ID_1166367446" ENDARROW="Default" ENDINCLINATION="-134;6;" ID="Arrow_ID_1369831225" SOURCE="ID_151535815" STARTARROW="None" STARTINCLINATION="-270;14;"/>
|
||||
<linktarget COLOR="#4546d4" DESTINATION="ID_1166367446" ENDARROW="Default" ENDINCLINATION="-4;229;" ID="Arrow_ID_698609212" SOURCE="ID_764351741" STARTARROW="None" STARTINCLINATION="62;-202;"/>
|
||||
<icon BUILTIN="clanbomber"/>
|
||||
|
|
@ -92299,7 +92279,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
</node>
|
||||
</node>
|
||||
</node>
|
||||
<node COLOR="#434598" CREATED="1734565250264" ID="ID_225480799" MODIFIED="1734755909324" TEXT="Typ-Konfiguration">
|
||||
<node COLOR="#434598" CREATED="1734565250264" FOLDED="true" ID="ID_225480799" MODIFIED="1734755909324" TEXT="Typ-Konfiguration">
|
||||
<linktarget COLOR="#3924a1" DESTINATION="ID_225480799" ENDARROW="Default" ENDINCLINATION="42;177;" ID="Arrow_ID_1550271356" SOURCE="ID_1507739045" STARTARROW="None" STARTINCLINATION="202;11;"/>
|
||||
<font BOLD="true" NAME="SansSerif" SIZE="12"/>
|
||||
<icon BUILTIN="yes"/>
|
||||
|
|
@ -92481,7 +92461,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
<icon BUILTIN="button_ok"/>
|
||||
</node>
|
||||
</node>
|
||||
<node COLOR="#338800" CREATED="1734572571310" ID="ID_762884176" MODIFIED="1734724836381" TEXT="Traits-Template für Parameter-Funktor vorsehen">
|
||||
<node COLOR="#338800" CREATED="1734572571310" FOLDED="true" ID="ID_762884176" MODIFIED="1734724836381" TEXT="Traits-Template für Parameter-Funktor vorsehen">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node COLOR="#435e98" CREATED="1734642562758" ID="ID_124741811" MODIFIED="1734642577949" TEXT="_ParamFun<FUN>">
|
||||
<font BOLD="true" NAME="SansSerif" SIZE="12"/>
|
||||
|
|
@ -92704,7 +92684,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
</node>
|
||||
</node>
|
||||
</node>
|
||||
<node COLOR="#338800" CREATED="1734572439920" ID="ID_955026014" MODIFIED="1734724830701" TEXT="createFeed (TurnoutSystem&) implementieren">
|
||||
<node COLOR="#338800" CREATED="1734572439920" FOLDED="true" ID="ID_955026014" MODIFIED="1734724830701" TEXT="createFeed (TurnoutSystem&) implementieren">
|
||||
<linktarget COLOR="#4798a5" DESTINATION="ID_955026014" ENDARROW="Default" ENDINCLINATION="-620;38;" ID="Arrow_ID_278216824" SOURCE="ID_112390056" STARTARROW="None" STARTINCLINATION="1203;53;"/>
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node COLOR="#5b280f" CREATED="1734642711594" ID="ID_628059323" MODIFIED="1734642730462" TEXT="man könnte hier von der Konfiguration ein Lambda generieren lassen">
|
||||
|
|
@ -92740,8 +92720,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
...das war der Name im Prototyping-Entwurf, und der ist viel besser!
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<linktarget COLOR="#574bd8" DESTINATION="ID_436769976" ENDARROW="Default" ENDINCLINATION="96;267;" ID="Arrow_ID_1852811022" SOURCE="ID_417738434" STARTARROW="None" STARTINCLINATION="710;0;"/>
|
||||
<icon BUILTIN="yes"/>
|
||||
</node>
|
||||
|
|
@ -92855,7 +92834,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
</node>
|
||||
</node>
|
||||
<node BACKGROUND_COLOR="#eef0c5" COLOR="#990000" CREATED="1734739984644" HGAP="-128" ID="ID_366268182" MODIFIED="1734755313572" TEXT="Typ-Anpassungen aus dem Umbau der FeedManifold" VSHIFT="6">
|
||||
<linktarget COLOR="#fe1573" DESTINATION="ID_366268182" ENDARROW="Default" ENDINCLINATION="-1176;63;" ID="Arrow_ID_39556747" SOURCE="ID_86160844" STARTARROW="None" STARTINCLINATION="-112;6;"/>
|
||||
<linktarget COLOR="#fe1573" DESTINATION="ID_366268182" ENDARROW="Default" ENDINCLINATION="-1176;63;" ID="Arrow_ID_39556747" SOURCE="ID_86160844" STARTARROW="None" STARTINCLINATION="-122;6;"/>
|
||||
<icon BUILTIN="pencil"/>
|
||||
<node BACKGROUND_COLOR="#ecdbc7" CREATED="1734740397311" ID="ID_1507739045" MODIFIED="1734740758439" STYLE="fork">
|
||||
<richcontent TYPE="NODE"><html>
|
||||
|
|
@ -92878,8 +92857,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
«<font color="#721f3c"><b>InvocationAdapter</b></font>» ist nun stets die <b><font color="#4a0ddb">FeedManifold</font></b>
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<icon BUILTIN="forward"/>
|
||||
</node>
|
||||
<node BACKGROUND_COLOR="#fdfdcf" COLOR="#ff0000" CREATED="1734740640102" ID="ID_1102101096" MODIFIED="1734755323681" TEXT="kein direkter Zugriff auf FunSpec / _ProcFun oder sonstige Typ-Logik">
|
||||
|
|
@ -92898,8 +92876,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
muß <i>von unten</i> kommen
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node CREATED="1734743847378" ID="ID_1975400845" LINK="#ID_1418298495" MODIFIED="1734743941822" TEXT="die aktuelle "fillRemaining"-Lösung war bloße Verlegenheit">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
|
|
@ -92934,8 +92911,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
diese Schachtel möchte ich nicht nach außen aufmachen...
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
</node>
|
||||
<node BACKGROUND_COLOR="#e0ceaa" COLOR="#690f14" CREATED="1734747837427" ID="ID_1236798413" MODIFIED="1734748747436" TEXT="dann bietet der Prototyp eben interne Iteration">
|
||||
<icon BUILTIN="stop-sign"/>
|
||||
|
|
@ -92956,8 +92932,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
</li>
|
||||
</ul>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
</node>
|
||||
<node COLOR="#338800" CREATED="1734748860873" ID="ID_130759425" MODIFIED="1734754850113" TEXT="Descriptor-Tupel instantiieren lassen">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
|
|
@ -92983,8 +92958,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
<u>Lösung</u>: Iteration über ein Buffer-Descriptor-Tupel
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<icon BUILTIN="forward"/>
|
||||
</node>
|
||||
<node BACKGROUND_COLOR="#dfc1e7" COLOR="#5c4d6e" CREATED="1734743957986" ID="ID_25025378" MODIFIED="1734743976965" STYLE="fork" TEXT="Spätere Erweiterungen">
|
||||
|
|
@ -93490,7 +93464,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
</node>
|
||||
</node>
|
||||
</node>
|
||||
<node COLOR="#338800" CREATED="1734133400400" ID="ID_1364724277" MODIFIED="1734726554241" TEXT="zusätzlichen Funktor für die Parameter akzeptieren">
|
||||
<node COLOR="#338800" CREATED="1734133400400" FOLDED="true" ID="ID_1364724277" MODIFIED="1734726554241" TEXT="zusätzlichen Funktor für die Parameter akzeptieren">
|
||||
<arrowlink COLOR="#0299c0" DESTINATION="ID_1127056731" ENDARROW="Default" ENDINCLINATION="-1257;-48;" ID="Arrow_ID_1717201620" STARTARROW="None" STARTINCLINATION="-908;50;"/>
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node BACKGROUND_COLOR="#c8c0b6" COLOR="#435e98" CREATED="1734562038955" ID="ID_692448245" MODIFIED="1734726547060" TEXT="Aufruf-Situation bedenken">
|
||||
|
|
@ -93635,10 +93609,10 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1734133441531" ID="ID_109108903" MODIFIED="1734133509736" TEXT="Turnout-System mit Storage implementieren">
|
||||
<icon BUILTIN="flag-yellow"/>
|
||||
</node>
|
||||
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1734133474862" ID="ID_1734639141" MODIFIED="1734133509736" TEXT="MediaWeavingPattern intern anpassen">
|
||||
<icon BUILTIN="flag-yellow"/>
|
||||
<node BACKGROUND_COLOR="#eef0c5" COLOR="#990000" CREATED="1734730549338" ID="ID_979346169" MODIFIED="1734755806023" TEXT="Schwenk auf die neue Architektur">
|
||||
<icon BUILTIN="pencil"/>
|
||||
<node COLOR="#338800" CREATED="1734133474862" ID="ID_1734639141" MODIFIED="1734831257737" TEXT="MediaWeavingPattern intern anpassen">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node COLOR="#338800" CREATED="1734730549338" ID="ID_979346169" MODIFIED="1734831179403" TEXT="Schwenk auf die neue Architektur">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node CREATED="1734730561471" ID="ID_645384678" MODIFIED="1734732327705">
|
||||
<richcontent TYPE="NODE"><html>
|
||||
<head/>
|
||||
|
|
@ -93647,8 +93621,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
<u>Strategie</u>: während des Umbaues den alten downstream-Code compilierbar gehalten
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
<head/>
|
||||
<body>
|
||||
|
|
@ -93656,8 +93629,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
und zwar, indem ich zunächst die Type-Traits umgestellt habe, und dann den alten Code auf das neue Traits-Interface portiert. Damit konnte ich die alte Implementierung der FeedManifold (als "FoldManifeed" ☺) im Code erhalten — und alles was darunter hängt...
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<icon BUILTIN="info"/>
|
||||
</node>
|
||||
<node CREATED="1734730703285" ID="ID_1632513810" MODIFIED="1734732313042" TEXT="Prinzip der Umstellung">
|
||||
|
|
@ -93685,8 +93657,9 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
</node>
|
||||
<node CREATED="1734731013292" ID="ID_1739494059" MODIFIED="1734731049860" TEXT="Konsequenz: DirectFunctionInvocation ≡ INVO im MediaWeavingPattern"/>
|
||||
</node>
|
||||
<node CREATED="1734731085818" ID="ID_815034881" MODIFIED="1734731091960" TEXT="Schritte">
|
||||
<node COLOR="#435e98" CREATED="1734731093345" ID="ID_1788687876" MODIFIED="1734738422927" TEXT="Alle Verwendungen von INVO durchgehen">
|
||||
<node COLOR="#338800" CREATED="1734731085818" ID="ID_815034881" MODIFIED="1734831167821" TEXT="Schritte">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
<node COLOR="#435e98" CREATED="1734731093345" FOLDED="true" ID="ID_1788687876" MODIFIED="1734831204758" TEXT="Alle Verwendungen von INVO durchgehen">
|
||||
<icon BUILTIN="full-1"/>
|
||||
<node CREATED="1734731825070" ID="ID_380614864" MODIFIED="1734738412305" TEXT="INVO::Feed">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
|
|
@ -93705,8 +93678,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
Haha! Nur ist das jetzt eines der zentralen Gelenkstellen im FeedPrototype geworden — oh oh oh wenn das alles bloß nicht so spannend wäre, könnte man ja glatt anderen Leuten davon erzählen
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<arrowlink COLOR="#574bd8" DESTINATION="ID_436769976" ENDARROW="Default" ENDINCLINATION="96;267;" ID="Arrow_ID_1852811022" STARTARROW="None" STARTINCLINATION="710;0;"/>
|
||||
<icon BUILTIN="ksmiletris"/>
|
||||
</node>
|
||||
|
|
@ -93726,8 +93698,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
ich habe die Einbindung in ein fixed-size-Array der Größe N komplett aufgegeben, zugunsten einer flexiblen, Tuple-basierten Storage
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
</node>
|
||||
<node COLOR="#338800" CREATED="1734732101982" ID="ID_1277186217" MODIFIED="1734738065499" TEXT="die Prüfung könnte verschärft werden">
|
||||
<icon BUILTIN="button_ok"/>
|
||||
|
|
@ -93775,7 +93746,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
<icon BUILTIN="button_ok"/>
|
||||
</node>
|
||||
</node>
|
||||
<node COLOR="#435e98" CREATED="1734731121493" ID="ID_620770826" MODIFIED="1734754350499" TEXT="Konstruktor DirectFunctionInvocation ⟶ neues Builder-API">
|
||||
<node COLOR="#435e98" CREATED="1734731121493" FOLDED="true" ID="ID_620770826" MODIFIED="1734754350499" TEXT="Konstruktor DirectFunctionInvocation ⟶ neues Builder-API">
|
||||
<icon BUILTIN="full-2"/>
|
||||
<node BACKGROUND_COLOR="#ddd0b6" CREATED="1734731288131" HGAP="24" ID="ID_533282289" MODIFIED="1734739666626" TEXT="das ist ein Argument Pass-through im MediaWeavingPattern" VSHIFT="1">
|
||||
<richcontent TYPE="NOTE"><html>
|
||||
|
|
@ -93785,8 +93756,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
|
|||
alle Argumente ab dem 4.Argument gehen pauschal durch an INVO
|
||||
</p>
|
||||
</body>
|
||||
</html>
|
||||
</richcontent>
|
||||
</html></richcontent>
|
||||
<linktarget COLOR="#606872" DESTINATION="ID_533282289" ENDARROW="Default" ENDINCLINATION="-44;2;" ID="Arrow_ID_473003071" SOURCE="ID_202344901" STARTARROW="None" STARTINCLINATION="-18;13;"/>
|
||||
<icon BUILTIN="idea"/>
|
||||
</node>
|
||||
|
|
@@ -93806,7 +93776,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</node>
</node>
</node>
<node COLOR="#435e98" CREATED="1734754367664" ID="ID_62712804" MODIFIED="1734755554512" TEXT="Typ-Zugriffe im WeavingPatternBuilder vorläufig schwenken">
<node COLOR="#435e98" CREATED="1734754367664" FOLDED="true" ID="ID_62712804" MODIFIED="1734831214361" TEXT="Typ-Zugriffe im WeavingPatternBuilder vorläufig schwenken">
<arrowlink COLOR="#46476c" DESTINATION="ID_86160844" ENDARROW="Default" ENDINCLINATION="-72;-50;" ID="Arrow_ID_245251166" STARTARROW="None" STARTINCLINATION="-128;6;"/>
<icon BUILTIN="full-3"/>
<node CREATED="1734754525235" ID="ID_247981581" MODIFIED="1734754613970" TEXT="Port- und WeavingPattern-Typ neu aufbauen">
@@ -93816,26 +93786,26 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
<arrowlink COLOR="#853658" DESTINATION="ID_1707049286" ENDARROW="Default" ENDINCLINATION="-171;10;" ID="Arrow_ID_1775048050" STARTARROW="None" STARTINCLINATION="-4;72;"/>
</node>
</node>
<node COLOR="#435e98" CREATED="1734731190526" ID="ID_136921376" MODIFIED="1734755557544" TEXT="NodeLinkage_test wieder zum Laufen bekommen">
<node COLOR="#435e98" CREATED="1734731190526" FOLDED="true" ID="ID_136921376" MODIFIED="1734755557544" TEXT="NodeLinkage_test wieder zum Laufen bekommen">
<icon BUILTIN="full-4"/>
<node BACKGROUND_COLOR="#dee8ae" COLOR="#116b3a" CREATED="1734755559673" HGAP="37" ID="ID_1731960603" MODIFIED="1734755679627" TEXT="Läuft AUF ANHIEB!!!!" VSHIFT="23">
<font NAME="SansSerif" SIZE="16"/>
<icon BUILTIN="ksmiletris"/>
</node>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1734731202514" ID="ID_302715819" MODIFIED="1734754497522" TEXT="obsoleten Code wegräumen">
<node COLOR="#435e98" CREATED="1734731202514" ID="ID_302715819" MODIFIED="1734831126160" TEXT="obsoleten Code wegräumen">
<icon BUILTIN="full-5"/>
</node>
</node>
<node BACKGROUND_COLOR="#eef0c5" COLOR="#990000" CREATED="1734739954996" ID="ID_86160844" MODIFIED="1734755367231" TEXT="Typ-Anpassungen in den Weaving-Builder übernehmen">
<arrowlink COLOR="#fe1573" DESTINATION="ID_366268182" ENDARROW="Default" ENDINCLINATION="-1176;63;" ID="Arrow_ID_39556747" STARTARROW="None" STARTINCLINATION="-112;6;"/>
<node COLOR="#338800" CREATED="1734739954996" ID="ID_86160844" MODIFIED="1734831186623" TEXT="Typ-Anpassungen in den Weaving-Builder übernehmen">
<arrowlink COLOR="#fe1573" DESTINATION="ID_366268182" ENDARROW="Default" ENDINCLINATION="-1176;63;" ID="Arrow_ID_39556747" STARTARROW="None" STARTINCLINATION="-122;6;"/>
<linktarget COLOR="#46476c" DESTINATION="ID_86160844" ENDARROW="Default" ENDINCLINATION="-72;-50;" ID="Arrow_ID_245251166" SOURCE="ID_62712804" STARTARROW="None" STARTINCLINATION="-128;6;"/>
<icon BUILTIN="pencil"/>
<icon BUILTIN="button_ok"/>
<node CREATED="1734740175883" ID="ID_1540451" MODIFIED="1734754608354" TEXT="FeedPrototype explizit als Einstiegspunkt in das WeavingPattern">
<linktarget COLOR="#493bce" DESTINATION="ID_1540451" ENDARROW="Default" ENDINCLINATION="-250;-95;" ID="Arrow_ID_1514499146" SOURCE="ID_1622390528" STARTARROW="None" STARTINCLINATION="-259;12;"/>
<linktarget COLOR="#4765ac" DESTINATION="ID_1540451" ENDARROW="Default" ENDINCLINATION="91;0;" ID="Arrow_ID_48125158" SOURCE="ID_247981581" STARTARROW="None" STARTINCLINATION="124;8;"/>
</node>
<node CREATED="1734740131793" ID="ID_1426590563" MODIFIED="1734754683387" TEXT="einheitlicher Buffer-Typ fällt weg">
<node COLOR="#435e98" CREATED="1734740131793" FOLDED="true" ID="ID_1426590563" MODIFIED="1734831244949" TEXT="einheitlicher Buffer-Typ fällt weg">
<icon BUILTIN="messagebox_warning"/>
<node CREATED="1734741263982" ID="ID_1209777361" MODIFIED="1734741275195" TEXT="bisher in fillRemainingBufferTypes">
<node CREATED="1734741312434" ID="ID_1820203884" MODIFIED="1734741318921" TEXT="wird vom NodeBuilder aktiviert"/>
@@ -93873,7 +93843,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</node>
</node>
</node>
<node BACKGROUND_COLOR="#e0ceaa" COLOR="#690f44" CREATED="1734741417311" ID="ID_1418298495" MODIFIED="1734741448586" TEXT="Verdacht: das ist ein Rest der »flexiblen Verdrahtung«">
<node BACKGROUND_COLOR="#e0ceaa" COLOR="#690f44" CREATED="1734741417311" FOLDED="true" ID="ID_1418298495" MODIFIED="1734831236790" TEXT="Verdacht: das ist ein Rest der »flexiblen Verdrahtung«">
<icon BUILTIN="broken-line"/>
<node CREATED="1734741453255" ID="ID_146334026" MODIFIED="1734741484253" TEXT="anfangs habe ich auf allen Ebenen komplette Flexibilität angenommen"/>
<node CREATED="1734741487282" ID="ID_1358822496" MODIFIED="1734741507140" TEXT="das ließ sich nicht durchführen ⟹ Prototyp der default-1:1-Logik"/>
@@ -93913,8 +93883,8 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</node>
</node>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1734755784065" ID="ID_305162421" MODIFIED="1734755800119" TEXT="Rückbau und Dokumentation">
<icon BUILTIN="flag-yellow"/>
<node COLOR="#338800" CREATED="1734755784065" ID="ID_305162421" MODIFIED="1734831255516" TEXT="Rückbau und Dokumentation">
<icon BUILTIN="button_ok"/>
</node>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1734133492043" ID="ID_901061219" MODIFIED="1734133509737" TEXT="ParamWeavingPattern hinzubauen">
@@ -96935,10 +96905,18 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</node>
</node>
</node>
<node BACKGROUND_COLOR="#eef0c5" COLOR="#990000" CREATED="1734059951987" ID="ID_1795912761" MODIFIED="1734060532006" TEXT="bestehenden Entwurf um Parameter ergänzen">
<node BACKGROUND_COLOR="#eef0c5" COLOR="#990000" CREATED="1734059951987" ID="ID_1795912761" MODIFIED="1734831817696" TEXT="bestehenden Entwurf um Parameter ergänzen">
<arrowlink COLOR="#b02152" DESTINATION="ID_1347066000" ENDARROW="Default" ENDINCLINATION="-739;36;" ID="Arrow_ID_1367798112" STARTARROW="None" STARTINCLINATION="532;31;"/>
<linktarget COLOR="#da1877" DESTINATION="ID_1795912761" ENDARROW="Default" ENDINCLINATION="-997;61;" ID="Arrow_ID_1582668720" SOURCE="ID_1344849864" STARTARROW="None" STARTINCLINATION="-507;-39;"/>
<linktarget COLOR="#5718da" DESTINATION="ID_1795912761" ENDARROW="Default" ENDINCLINATION="-997;61;" ID="Arrow_ID_1582668720" SOURCE="ID_1344849864" STARTARROW="None" STARTINCLINATION="-507;-39;"/>
<linktarget COLOR="#787a90" DESTINATION="ID_1795912761" ENDARROW="Default" ENDINCLINATION="4;118;" ID="Arrow_ID_804587938" SOURCE="ID_1862852275" STARTARROW="None" STARTINCLINATION="-427;0;"/>
<icon BUILTIN="pencil"/>
<node BACKGROUND_COLOR="#e0ceaa" COLOR="#690f14" CREATED="1734831402991" HGAP="33" ID="ID_1684320466" MODIFIED="1734831419407" TEXT="war ein heftiger Umbau" VSHIFT="17"/>
<node BACKGROUND_COLOR="#c8c0b6" COLOR="#435e98" CREATED="1734831421419" ID="ID_389961461" MODIFIED="1734831459733" TEXT="Invocation-Struktur damit festgelegt">
<icon BUILTIN="idea"/>
</node>
<node COLOR="#484398" CREATED="1734831462029" ID="ID_1441507649" MODIFIED="1734831483684" TEXT="(Ergebnis ist wesentlich verbessert)">
<font NAME="SansSerif" SIZE="11"/>
</node>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1733428913753" ID="ID_1660514427" MODIFIED="1733428922824" TEXT="zugehörige ParamAgent-Nodes">
<icon BUILTIN="flag-yellow"/>
@@ -97746,9 +97724,9 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1733430358414" ID="ID_1127795681" MODIFIED="1733430415712" TEXT="Parametrisierbarkeit der Operationen darstellen">
<icon BUILTIN="flag-yellow"/>
<node BACKGROUND_COLOR="#eef0c5" COLOR="#990000" CREATED="1733430421541" ID="ID_1267000845" MODIFIED="1733430780646" TEXT="Klärung konzeptioneller Grundlagen">
<node COLOR="#338800" CREATED="1733430421541" ID="ID_1267000845" MODIFIED="1734831531586" TEXT="Klärung konzeptioneller Grundlagen">
<arrowlink COLOR="#7e1ab2" DESTINATION="ID_1750696847" ENDARROW="Default" ENDINCLINATION="266;-492;" ID="Arrow_ID_379194887" STARTARROW="None" STARTINCLINATION="-138;440;"/>
<icon BUILTIN="pencil"/>
<icon BUILTIN="button_ok"/>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1733430838925" ID="ID_1513206906" MODIFIED="1733533298660" TEXT="Exkurs: Einbindung in den Node-Builder betrachten">
<arrowlink COLOR="#c50127" DESTINATION="ID_1619015453" ENDARROW="Default" ENDINCLINATION="-153;535;" ID="Arrow_ID_1374888051" STARTARROW="None" STARTINCLINATION="719;37;"/>
@@ -97812,8 +97790,8 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
<icon BUILTIN="yes"/>
<icon BUILTIN="pencil"/>
</node>
<node BACKGROUND_COLOR="#eee5c3" COLOR="#990000" CREATED="1734059643300" ID="ID_1344849864" MODIFIED="1734060101312" TEXT="Param-Storage in Weaving-Pattern einführen">
<arrowlink COLOR="#da1877" DESTINATION="ID_1795912761" ENDARROW="Default" ENDINCLINATION="-997;61;" ID="Arrow_ID_1582668720" STARTARROW="None" STARTINCLINATION="-507;-39;"/>
<node COLOR="#435e98" CREATED="1734059643300" ID="ID_1344849864" MODIFIED="1734831521304" TEXT="Param-Storage in Weaving-Pattern einführen">
<arrowlink COLOR="#5718da" DESTINATION="ID_1795912761" ENDARROW="Default" ENDINCLINATION="-997;61;" ID="Arrow_ID_1582668720" STARTARROW="None" STARTINCLINATION="-507;-39;"/>
<icon BUILTIN="yes"/>
</node>
</node>
@@ -97955,7 +97933,16 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
<icon BUILTIN="hourglass"/>
<node BACKGROUND_COLOR="#fdfdcf" COLOR="#ff0000" CREATED="1728786146493" ID="ID_1993277373" MODIFIED="1733429049746" TEXT="Einstieg in das Turnout-System klären">
<icon BUILTIN="flag-pink"/>
<node BACKGROUND_COLOR="#f8f1cb" COLOR="#a50125" CREATED="1733012850028" ID="ID_1750696847" MODIFIED="1733430472564" TEXT="ungeklärte Aufgabe: Parameter in die Berechungsfunktion einspeisen">
<node BACKGROUND_COLOR="#c8c0b6" COLOR="#413b96" CREATED="1733012850028" ID="ID_1750696847" MODIFIED="1734831646520">
<richcontent TYPE="NODE"><html>
<head/>
<body>
<p>
<u>essentielle Aufgabe</u>: <b>Parameter</b> in die Berechungsfunktion einspeisen
</p>
</body>
</html>
</richcontent>
<linktarget COLOR="#7e1ab2" DESTINATION="ID_1750696847" ENDARROW="Default" ENDINCLINATION="266;-492;" ID="Arrow_ID_379194887" SOURCE="ID_1267000845" STARTARROW="None" STARTINCLINATION="-138;440;"/>
<icon BUILTIN="messagebox_warning"/>
<node CREATED="1733012908355" ID="ID_1531245395" MODIFIED="1733012926789" TEXT="für die konkrete Implementierungs-Operation sind das Funktionsparameter"/>
@@ -98006,7 +97993,7 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</body>
</html></richcontent>
</node>
<node CREATED="1733430520384" ID="ID_1927401834" LINK="#ID_1734022233" MODIFIED="1733430746380" TEXT="ParamAgent-Node als Adapter vorgesehen">
<node COLOR="#5b280f" CREATED="1733430520384" ID="ID_1927401834" LINK="#ID_1734022233" MODIFIED="1734831721652" TEXT="ParamAgent-Node als Adapter vorgesehen">
<richcontent TYPE="NOTE"><html>
<head/>
<body>
@@ -98015,6 +98002,24 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
</p>
</body>
</html></richcontent>
<icon BUILTIN="button_cancel"/>
</node>
<node BACKGROUND_COLOR="#c8c0b6" CREATED="1734831689023" ID="ID_1862852275" MODIFIED="1734831817696" TEXT="Nein: stattdessen Parameter direkt in die FeedManifold mit eingebaut">
<arrowlink COLOR="#787a90" DESTINATION="ID_1795912761" ENDARROW="Default" ENDINCLINATION="4;118;" ID="Arrow_ID_804587938" STARTARROW="None" STARTINCLINATION="-427;0;"/>
<icon BUILTIN="yes"/>
<node CREATED="1734831851832" ID="ID_1677160085" MODIFIED="1734832016224" TEXT="das war ein tiefer Eingriff in den bestehenden Entwurf...">
<richcontent TYPE="NOTE"><html>
<head/>
<body>
<p>
und ehrlich gesagt, ziemlich heftig — nach dem Motto »jetzt oder nie« habe ich effektiv die C-Arrays und void* über Bord geworfen und die gesamte Invocation-Storage auf typisierte Daten und vor allem <b>Tupel</b>  aufgebaut. Der Level an Metaprogramming ist nun zwar viel konzentrierter und tief in der Implementierung versteckt, geht aber an Heftigkeit noch weit über den allerersten Entwurf zur Render-Engine (von 2009) hinaus. Auch die Loki-Typlisten sind wieder mit dabei...
</p>
</body>
</html>
</richcontent>
<font ITALIC="true" NAME="SansSerif" SIZE="11"/>
</node>
<node CREATED="1734832020401" ID="ID_1544407434" LINK="#ID_62561618" MODIFIED="1734832048653" TEXT="läuft nun auf ein »Kompromiß-Modell hinaus"/>
</node>
</node>
</node>
@@ -98533,6 +98538,18 @@ Date:   Thu Apr 20 18:53:17 2023 +0200<br/>
<node CREATED="1733423697567" ID="ID_1377737473" MODIFIED="1733423899519" TEXT="das TurnoutSystem dient als Daten-Austausch-Hub"/>
<node CREATED="1733423734224" ID="ID_170887086" MODIFIED="1733423899519" TEXT="die Default-Ausstattung ermöglicht Zugriff auf die Invocation-Parameter"/>
<node CREATED="1733423820748" ID="ID_951224237" MODIFIED="1733423899519" TEXT="der Level-2 Builder stellt Methoden bereit zum Anlegen von Parameter-Nodes"/>
<node COLOR="#471e21" CREATED="1734832077434" ID="ID_1709268968" MODIFIED="1734832147899" TEXT="letztlich spielen nun die »Agent-Nodes« keine Rolle mehr für das Parameter-Handling">
<richcontent TYPE="NOTE"><html>
<head/>
<body>
<p>
...denn wir haben nun (nach einem heftigen Umbau) direkt eine typisierte, strukturierte Storage in der FeedManifold geschaffen
</p>
</body>
</html>
</richcontent>
<icon BUILTIN="yes"/>
</node>
</node>
</node>
</node>
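The notes in this mind map describe replacing C-arrays and `void*` in the invocation storage with typed, tuple-based storage inside the `FeedManifold`. A minimal sketch of that idea is given below; it is an illustration only, under the assumption of a simplified interface — the names (`FeedManifold`, `params`, `invoke`) and the fixed-size stand-in buffers are hypothetical and do not reflect Lumiera's actual code, which also employs Loki-style typelists and considerably more metaprogramming.

```cpp
#include <tuple>
#include <utility>
#include <cassert>

// Hypothetical sketch: typed invocation storage instead of void* arrays.
// Parameters live in a std::tuple and are expanded into the processing
// function call via std::apply — no casts, fully type-checked.
template<class FUN, class... PARS>
struct FeedManifold
{
    std::tuple<PARS...> params;  // typed, structured parameter storage
    double inBuff[4];            // stand-in for an input frame buffer
    double outBuff[4];           // stand-in for an output frame buffer
    FUN fun;                     // the processing function to invoke

    void invoke()
    {   // expand the parameter tuple into the function invocation
        std::apply ([this](auto&... ps){ fun (inBuff, outBuff, ps...); }
                   ,params);
    }
};
```

As a usage sketch, a "gain" function taking one `double` parameter can be bound and invoked without any untyped indirection: `FeedManifold<decltype(gain), double> fm{ {2.0}, {1,2,3,4}, {}, gain }; fm.invoke();` then scales each input sample by the stored factor.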