Wiring concept (I) -- stream connections
This commit is contained in:
parent
c744a538e0
commit
2eafc5f532
1 changed file with 41 additions and 14 deletions
@ -1902,7 +1902,7 @@ For this Lumiera design, we could consider making GOP just another raw media dat
→see in [[Wikipedia|http://en.wikipedia.org/wiki/Group_of_pictures]]
</pre>
</div>
<div title="GlobalPipe" modifier="Ichthyostega" modified="201007190107" created="201007110200" tags="Model design spec draft" changecount="22">
<div title="GlobalPipe" modifier="Ichthyostega" modified="201011072139" created="201007110200" tags="Model design spec draft" changecount="23">
<pre>Each [[Timeline]] has an associated set of global [[pipes|Pipe]] (global busses), similar to the subgroups of a sound mixing desk.
In the typical standard configuration, there is (at least) a video master and a sound master pipe. Like any pipe, ingoing connections attach to the input side, attached effects form a chain, where the last node acts as exit node. The ~Pipe-ID of such a global bus can be used to route media streams, allowing the global pipe to act as a summation bus bar.
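The summation-bus behaviour described above can be sketched roughly as follows. This is an illustrative Python sketch, not Lumiera's actual implementation; all class and method names here are assumptions for the purpose of illustration:

```python
class Pipe:
    """Sketch of a global pipe: ingoing connections attach to the input
    side and are summed, attached effects form a chain, and the last
    node in the chain acts as exit node."""
    def __init__(self, pipe_id, stream_type):
        self.pipe_id = pipe_id        # the Pipe-ID used for routing
        self.stream_type = stream_type
        self.inputs = []              # ingoing connections (frame sources)
        self.effects = []             # chain of frame -> frame functions

    def attach_input(self, source):
        self.inputs.append(source)

    def attach_effect(self, effect):
        self.effects.append(effect)

    def pull(self):
        frame = sum(src() for src in self.inputs)   # summation bus bar
        for effect in self.effects:
            frame = effect(frame)
        return frame                                # value at the exit node

# two sources routed to the video master bus, one effect in the chain
master = Pipe("video-master", "video")
master.attach_input(lambda: 1)
master.attach_input(lambda: 2)
master.attach_effect(lambda f: f * 10)
print(master.pull())   # -> 30
```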
@ -1924,7 +1924,7 @@ While there might still be some compromises or combined solutions &mdash; to
|Extras: | ? |tree-like bus arrangement,<br/> multi-stream bus |
So through this detailed comparison ''~Model-A looks favourable'': while the other model requires us to invent a good deal of the handling specifically for the global pipes, the former can be combined from patterns and solutions already used in other parts of the model, plus it allows some interesting extensions.
On second thought, the fact that the Bus-~MObject is rather void of any specific meaning doesn't weigh so much: As the Builder is based on the visitor pattern, the individual objects can be seen as //algebraic data types.// Besides, there is at least one little bit of specific functionality: a Bus object actually needs to //claim//&nbsp; to be the OutputDesignation, by referring to the same ~Pipe-ID used in other parts of the model to request output routing to this Bus. Without this match on both ends, an ~OutputDesignation may be mentioned at will, but no connection whatsoever will happen.
On second thought, the fact that the [[Bus-MObject|BusMO]] is rather void of any specific meaning doesn't weigh so much: As the Builder is based on the visitor pattern, the individual objects can be seen as //algebraic data types.// Besides, there is at least one little bit of specific functionality: a Bus object actually needs to //claim//&nbsp; to be the OutputDesignation, by referring to the same ~Pipe-ID used in other parts of the model to request output routing to this Bus. Without this match on both ends, an ~OutputDesignation may be mentioned at will, but no connection whatsoever will happen.
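The match-on-both-ends rule can be illustrated with a small sketch (Python for illustration only; `Bus` and `resolve_connections` are assumed names, not actual Lumiera API):

```python
class Bus:
    """An object claiming to root a given pipe (cf. the wiring claim)."""
    def __init__(self, pipe_id):
        self.pipe_id = pipe_id

def resolve_connections(designations, buses):
    """Only a designation matched by a claim on the same Pipe-ID
    results in an actual connection."""
    claims = {bus.pipe_id: bus for bus in buses}
    return [(pid, claims[pid]) for pid in designations if pid in claims]

buses = [Bus("video-master")]
wired = resolve_connections(["video-master", "sound-master"], buses)
# "sound-master" is mentioned, but nobody claims it -> no connection
print([pid for pid, bus in wired])   # -> ['video-master']
```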
</pre>
</div>
<div title="GuiCommunication" modifier="Ichthyostega" modified="200812050555" created="200812050543" tags="GuiIntegration draft" changecount="2">
@ -2878,12 +2878,12 @@ But because I know the opinions on this topic are varying (users tend to be delig
My proposed approach is to treat OpenGL as a separate video raw data type, requiring separate and specialized [[Processing Nodes|ProcNode]] for all calculations. Thus the Builder could connect OpenGL nodes whenever the render path can be covered wholly or partially, or maybe even just for preview.
</pre>
</div>
<div title="OperationPoint" modifier="Ichthyostega" modified="200909041742" created="200805270334" tags="def impl Builder" changecount="8">
<div title="OperationPoint" modifier="Ichthyostega" modified="201011071905" created="200805270334" tags="def impl Builder" changecount="9">
<pre>A low-level abstraction within the [[Builder]] &mdash; it serves to encapsulate the details of making multi-channel connections between the render nodes: In some cases, a node can handle N channels internally, while in other cases we need to replicate the node N times and wire each channel individually. As it stands, the OperationPoint marks the ''borderline between high-level and low-level model'': it is invoked in terms of ~MObjects and other entities of the high-level view, but internally it manages to create ProcNode and similar entities of the low-level model.
The operation point is provided by the current BuilderMould and used by the [[processing pattern|ProcPatt]] executing within this mould and conducting the current build step. The operation point's interface makes it possible //to abstract//&nbsp; these details, as well as //to gain additional control//&nbsp; if necessary (e.g. addressing only one of the channels). The most prominent build instruction used within the processing patterns (the instruction {{{"attach"}}}) relies on the aforementioned //approach of abstracted handling,// letting the operation point determine automatically how to make the connection.
This is possible because the operation point has been provided (by the mould) with informations about the media stream type to be wired, which, together with information accessible at the the [[render node interface|ProcNode]] and from the [[referred processing assets|ProcAsset]], with the help of the [[connection manager|ConManager]] allows to figure out what's possible and how to do the desired connections. Additionally, in the course of deciding about possible connections, the PathManager is consulted to guide strategic decisions regarding the [[render node configuration|NodeConfiguration]], possible type conversions and the rendering technology to employ.
This is possible because the operation point has been provided (by the mould) with information about the media stream type to be wired, which, together with information accessible at the [[render node interface|ProcNode]] and from the [[referred processing assets|ProcAsset]], and with the help of the [[connection manager|ConManager]], allows it to figure out what's possible and how to make the desired connections. Additionally, in the course of deciding about possible connections, the PathManager is consulted to guide strategic decisions regarding the [[render node configuration|NodeConfiguration]], possible type conversions and the rendering technology to employ.
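The multi-channel decision the OperationPoint encapsulates can be sketched as follows (an illustrative Python sketch under assumed names; the real builder operates on C++ node objects, not dictionaries):

```python
def attach(node_factory, channels, handles_multichannel):
    """Sketch of the OperationPoint's 'attach' decision: either one node
    processes all N channels internally, or the node is replicated N
    times and each channel is wired individually."""
    if handles_multichannel:
        return [node_factory(channels)]                # one N-channel node
    return [node_factory(1) for _ in range(channels)]  # N replicated nodes

make_node = lambda n_channels: {"channels": n_channels}  # stand-in factory
print(len(attach(make_node, 2, True)))    # -> 1
print(len(attach(make_node, 2, False)))   # -> 2
```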
</pre>
</div>
<div title="OutputDesignation" modifier="Ichthyostega" modified="201009242320" created="201006220126" tags="Model draft design discuss" changecount="42">
@ -2937,7 +2937,7 @@ While actually data frames are //pulled,// on a conceptual level data is assumed
As both of these specifications are given by [[Pipe]]-~IDs, the actual designation information may be reduced. Much can be inferred from the circumstances, because any pipe includes a StreamType, and an output designation for an incompatible stream type (e.g. an audio output when the pipe currently in question deals with video) is irrelevant.
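The relevance filtering described here can be sketched in a few lines (Python for illustration; the tuple representation of a designation is an assumption):

```python
def relevant(pipe_stream_type, designations):
    """A designation whose stream type is incompatible with the pipe
    in question (e.g. audio vs. video) is simply irrelevant and can
    be dropped from consideration."""
    return [d for d in designations if d[1] == pipe_stream_type]

# designations as (Pipe-ID, stream type) pairs
desigs = [("master-out", "video"), ("mix-out", "audio")]
print(relevant("video", desigs))   # -> [('master-out', 'video')]
```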
</pre>
</div>
<div title="OutputManagement" modifier="Ichthyostega" modified="201007190254" created="201007090155" tags="Model Rendering Player spec draft" changecount="17">
<div title="OutputManagement" modifier="Ichthyostega" modified="201011072240" created="201007090155" tags="Model Rendering Player spec draft" changecount="18">
<pre>//writing down some thoughts//
* ruled out the system outputs as OutputDesignation.
@ -2953,7 +2953,7 @@ From the implementation side, the only interesting exit nodes are the ones to be
* __playback__ always happens at a viewer element
!Attaching and mapping of exit nodes
[[Output designations|OutputDesignation]] are created by using a [[Pipe]]-ID and &mdash; at the same time &mdash; by some object //claiming to root this pipe.// The applicability of this pattern is figured out dynamically while building the render network, resulting in a collection of model ports as part of the current [[Fixture]]. A RenderProcess can be started to pull from these active exit points of a given timeline. Besides, when the timeline enclosing these model ports is [[connected to a viewer|ViewerPlayConnection]], an //output network is built,// to allow hooking exit points to the viewer component. Both cases encompass a mapping of exit nodes to actual output channels. Usually, this mapping relies on relative addressing of the output sinks, starting connections at the "first of each kind".
[[Output designations|OutputDesignation]] are created by using a [[Pipe]]-ID and &mdash; at the same time &mdash; by some object //claiming to root this pipe.// The applicability of this pattern is figured out dynamically while building the render network, resulting in a collection of model ports as part of the current [[Fixture]]. A RenderProcess can be started to pull from these active exit points of a given timeline. Besides, when the timeline enclosing these model ports is [[connected to a viewer|ViewerPlayConnection]], an [[output network|OutputNetwork]] //is built to allow hooking exit points to the viewer component.// Both cases encompass a mapping of exit nodes to actual output channels. Usually, this mapping relies on relative addressing of the output sinks, starting connections at the "first of each kind".
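The relative addressing of output sinks ("first of each kind") can be sketched as follows. This is an illustrative Python sketch under assumed names, not the actual mapping implementation:

```python
def default_mapping(model_ports, sinks):
    """Relative addressing: connect each exit point to the first unused
    output sink of the matching kind, in order of appearance."""
    mapping, used = {}, set()
    for port, kind in model_ports:
        for i, (sink, sink_kind) in enumerate(sinks):
            if sink_kind == kind and i not in used:
                mapping[port] = sink
                used.add(i)
                break
    return mapping

ports = [("video-out", "video"), ("audio-out", "audio")]
sinks = [("screen-1", "video"), ("speakers", "audio"), ("screen-2", "video")]
print(default_mapping(ports, sinks))
# -> {'video-out': 'screen-1', 'audio-out': 'speakers'}
```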
We should note that in both cases this mapping operation is controlled and driven by the output side of the connection: a viewer has fixed output capabilities, and rendering targets a specific container format, again with a fixed and pre-settled channel configuration (when configuring a render process, it might be necessary to account for //possible kinds of output streams,// so as to provide a sensible pre-selection of possible output container formats for the user to select from). Thus, as a starting point, we'll create a default configured mapping, assigning channels in order. This mapping should then be exposed to modification and tweaking by the user. For rendering, this is part of the render options dialog, while in case of a viewer connection, a switch board is created to allow modifying the default mapping.
@ -4934,7 +4934,7 @@ Consequently, as we can't get away with a fixed Enum of all stream prototypes,
NTSC and PAL video, video versus digitized film, HD video versus SD video, 3D versus flat video, cinemascope versus 4:3, stereophonic versus monaural, periphonic versus panoramic sound, Ambisonics versus 5.1, Dolby versus linear PCM...
</pre>
</div>
<div title="StreamType" modifier="Ichthyostega" modified="201003160202" created="200808060244" tags="spec draft" changecount="16">
<div title="StreamType" modifier="Ichthyostega" modified="201011071741" created="200808060244" tags="spec draft" changecount="21">
<pre>//how to classify and describe media streams//
Media data is supposed to appear structured as stream(s) over time. While there may be an inherent internal structuring, at a given perspective ''any stream is a unit and homogeneous''. In the context of digital media data processing, streams are always ''quantized'', which means they appear as a temporal sequence of data chunks called ''frames''.
@ -4949,13 +4949,13 @@ Media data is supposed to appear structured as stream(s) over time. While there
! Problem of Stream Type Description
Media types vary widely and exhibit a large number of different properties, which can't be subsumed under a single classification scheme. On the other hand we want to deal with media objects in a uniform and generic manner, because generally all kinds of media behave somewhat similarly. But the twist is that these similarities disappear when describing media with logical precision. Thus we are forced into specialized handling and operations for each kind of media, while we want to implement a generic handling concept.
! Stream Type handling in the Proc-Layer
! Lumiera Stream Type handling
!! Identification
A stream type is denoted by a StreamTypeID, an identifier acting as a unique key for accessing information related to the stream type. It corresponds to a StreamTypeDescriptor record, containing a &mdash; //not necessarily complete// &mdash; specification of the stream type, according to the classification detailed below.
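The ID-to-descriptor relation can be sketched as a simple registry lookup (an illustrative Python sketch; the field names are assumptions based on the classification below, and an incomplete specification simply leaves fields unset):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StreamTypeDescriptor:
    """The specification need not be complete: unknown aspects stay None."""
    kind: str
    prototype: Optional[str] = None
    impl_type: Optional[str] = None

registry = {}   # StreamTypeID -> StreamTypeDescriptor

registry["video:PAL"] = StreamTypeDescriptor(kind="Video", prototype="PAL")
print(registry["video:PAL"].kind)        # -> Video
print(registry["video:PAL"].impl_type)   # -> None (not yet specified)
```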
!! Classification
Within the Proc-Layer, media streams are treated largely in a similar manner. But, looking closer, not everything can be connected together, while on the other hand some classes of media streams can be considered //equivalent// in most respects. Thus it seems reasonable to separate the distinctions between media streams into several levels...
* Each media belongs to a fundamental ''kind'' of media, examples being __Video__, __Image__, __Audio__, __MIDI__,... Media streams of different kind can be considered somewhat "completely separate" &mdash; just the handling of each of those media kinds follows a common //generic pattern// augmented with specialisations. Basically, it is //impossible to connect// media streams of different kind. Under some circumstances there may be the possibility of a //transformation// though. For example, a still image can be incorporated into video, sound may be visualized, MIDI may control a sound synthesizer.
* Each media belongs to a fundamental ''kind'' of media, examples being __Video__, __Image__, __Audio__, __MIDI__, __Text__,... <br/>Media streams of different kind can be considered somewhat "completely separate" &mdash; just the handling of each of those media kinds follows a common //generic pattern// augmented with specialisations. Basically, it is //impossible to connect// media streams of different kind. Under some circumstances there may be the possibility of a //transformation// though. For example, a still image can be incorporated into video, sound may be visualized, MIDI may control a sound synthesizer.
* Below the level of distinct kinds of media streams, within every kind we have an open-ended collection of ''prototypes'', which, when compared directly, may each be quite distinct and different, but which may be //rendered//&nbsp; into each other. For example, we have stereoscopic (3D) video and we have the common flat video lacking depth information, we have several spatial audio systems (Ambisonics, Wave Field Synthesis), we have panorama simulating sound systems (5.1, 7.1,...), we have common stereophonic and monaural audio. It is considered important to retain some openness and configurability within this level of distinction, which means this classification should better be done by rules than by setting up a fixed property table. For example, it may be desirable for some production to distinguish between digitized film and video, or between NTSC and PAL, while in another production everything is just "video" and can be converted automatically. The most noticeable consequence of such a distinction is that any Bus or [[Pipe]] is always limited to a media stream of a single prototype. (&rarr; [[more|StreamPrototype]])
* Besides the distinction by prototypes, there are the various media ''implementation types''. This classification is not necessarily hierarchically related to the prototype classification, while in practice commonly there will be some sort of dependency. For example, both stereophonic and monaural audio may be implemented as 96kHz 24bit PCM with just a different number of channel streams, but we may as well get a dedicated stereo audio stream with two channels multiplexed into a single stream. For dealing with media streams of various implementation type, we need //library// routines, which also yield a //type classification system.// Most notably, for raw sound and video data we use the [[GAVL]] library, which defines a classification system for buffers and streams.
* Besides the type classification detailed thus far, we introduce an ''intention tag''. This is a synthetic classification owned by Lumiera and used for internal wiring decisions. Currently (8/08), we recognize the following intention tags: __Source__, __Raw__, __Intermediary__ and __Target__. Only media streams tagged as __Raw__ can be processed.
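The consequences of this layered classification can be condensed into two small predicates (illustrative Python under assumed names; the dictionary representation of a stream type is a stand-in for the descriptor record):

```python
INTENTIONS = {"Source", "Raw", "Intermediary", "Target"}

def may_connect(a, b):
    """Streams of a different fundamental kind can never be connected
    directly; at best a transformation could bridge them."""
    return a["kind"] == b["kind"]

def may_process(stream):
    """Only media streams tagged as Raw can be processed."""
    return stream["intention"] == "Raw"

video = {"kind": "Video", "intention": "Raw"}
audio = {"kind": "Audio", "intention": "Source"}
print(may_connect(video, audio))   # -> False
print(may_process(video))          # -> True
```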
@ -6625,13 +6625,40 @@ In case it's not already clear: we don't have "the" Render Engine, rat
The &raquo;current setup&laquo; of the objects in the session is a sort of global state. The same holds true for the Controller, as the Engine can be at playback, run a background render, or scrub single frames. But the whole complicated subsystem of the Builder and one given Render Engine configuration can be made ''stateless''. As a benefit, we can run these subsystems multi-threaded without the need for any precautions (locking, synchronizing), because all state information is passed in as function parameters and lives in local variables on the stack, or is contained in the StateProxy, which represents the given render //process// and is passed down as a function parameter as well. (Note: I use the term "stateless" in the usual, slightly relaxed manner; of course there are some configuration values contained in instance variables of the objects carrying out the calculations, but these values are considered constant over the course of the object's usage.)
</pre>
</div>
<div title="Wiring" modifier="Ichthyostega" modified="201011070501" created="201009250223" tags="Concepts Model design draft" changecount="3">
<pre>within Lumiera's Proc-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring details on how these connections are defined and controlled, how they are to be detected by the builder and finally implemented by links in the RenderEngine.
<div title="Wiring" modifier="Ichthyostega" modified="201011072303" created="201009250223" tags="Concepts Model design draft" changecount="27">
<pre>Within Lumiera's ~Proc-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring concept details how these connections are defined and controlled, how they are to be detected by the builder, and how they are finally implemented by links in the render engine.
&rarr; see OutputManagement
&rarr; see OutputDesignation
&rarr; see OperationPoint
&rarr; see StreamType
!Stream connections
A stream connection should result in media data traveling from a source to a sink. Here we have to distinguish between the high-level view and the situation in the render engine. At the session level and thus for the user, a //stream// is the elementary and distinguishable unit of "media content" flowing through the system. It can be described unabigously by having an uniform StreamType &mdash; this doesn't exclude the stream from being inherently structured, like containing several channels. The HighLevelModel can be interpreted as creating a system of stream connections, which can be categorised into two kinds of connections:
* the [[pipes|Pipe]] are rather rigid processing chains, limited to using one specific StreamType
* there are flexible interconnections or routing links, including the ability to sum or overlay media streams, and the possibility of stream type conversions.
A stream connection should result in media data traveling from a source to a sink. Here we have to distinguish between the high-level view and the situation in the render engine. At the session level, and thus for the user, a //stream// is the elementary unit of "media content" flowing through the system. It can be described unambiguously by having a uniform StreamType &mdash; this doesn't exclude the stream from being inherently structured, like containing several channels. The HighLevelModel can be interpreted as creating a system of stream connections, which can be categorised as yielding two kinds of connections:
* [[Pipes|Pipe]] are rather rigid ''processing chains'', limited to using one specific StreamType. Pipes are built by attaching processing descriptors (Effect ~MObjects), where the order of attachment is given by the placement.
* there are flexible interconnections or ''routing links'', including the ability to sum or overlay media streams, and the possibility of stream type conversions. They are controlled by the OutputDesignation ("wiring plug"), to be queried from the placement at the source side of the interconnection (i.e. at the exit point of a pipe).
Please note that the high-level model comprises a blueprint for constructing the render engine. There is no data "flowing" through this model, thus any "wiring" may be considered conceptual. Any wiring specifications here just express the intention of getting things connected in a given way. For example, a clip may find out (through a query of its placement) that it is intended to produce output for some destination / bus / subgroup called "XYZ".
The builder, on the contrary, considers matters locally. It is confined to a given set of objects handed in for processing, and during that processing it will collect all [[plugs|WiringPlug]] and [[claims|WiringClaim]] encountered. While the former denote the desired //destination//&nbsp; of data emerging from a pipe, the wiring claim expresses the fact that a given object //claims to be some output destination.// Typically, each [[global pipe or bus|GlobalPipe]] raises such a claim. Both plugs and claims are processed on a "just in case they exist" basis: when encountering a plug and //resolving//&nbsp; the corresponding claim, the builder drops off a WiringRequest accordingly. Note the additional resolution step, which might be necessary due to virtual media and nested sequences (read: the output designation might need to be translated into another designation, using the media/channel mapping created by the virtual entity &rarr; see [[mapping of output designations|OutputDesignation]])
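The collect-and-match step described above can be sketched roughly as follows (illustrative Python; the dictionary representation of session objects and the function name are assumptions, not the builder's actual interface):

```python
def collect_wiring_requests(objects):
    """Sketch of the builder's matching step: every plug (desired
    destination) that resolves against a claim (an object claiming to
    *be* that destination) drops off a wiring request."""
    claims = {obj["claim"]: obj["name"]
              for obj in objects if "claim" in obj}
    return [(obj["name"], claims[plug])
            for obj in objects
            for plug in obj.get("plugs", [])
            if plug in claims]

objects = [
    {"name": "clip-1", "plugs": ["video-master"]},
    {"name": "bus-A", "claim": "video-master"},
    {"name": "clip-2", "plugs": ["unknown-pipe"]},   # no claim -> no request
]
print(collect_wiring_requests(objects))   # -> [('clip-1', 'bus-A')]
```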
Processing the wiring request drives the actual connection step. It is conducted by the OperationPoint, provided by and executing within a BuilderMould and controlled by a [[processing pattern|ProcPatt]]. This rather flexible setup allows for wiring summation lines, including faders, scale and offset changes, and various overlay modes. Thus the specific form of the connection wired in here depends on all the local circumstances visible at that point of operation:
* the mould is picked from a small number of alternatives, based on the general wiring situation.
* the processing pattern is queried to fit the mould, the stream type and additional placement specifications ({{red{TODO 11/10: work out details}}})
* the stream type system itself contributes in determining possible connections and conversions, introducing further processing patterns
The final result, within the render engine, is a network of processing nodes. Each of these nodes holds a WiringDescriptor, created as a result of the wiring operation detailed above. This descriptor lists the predecessors and (in somewhat encoded form) the other details necessary for the processing node to respond properly to the engine's calculation requests (read: those details are implementation bound and can be expected to be made to fit)
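The predecessor-listing role of the wiring descriptor can be sketched as follows (an illustrative Python sketch of the pull principle; the actual WiringDescriptor encodes considerably more detail):

```python
class WiringDescriptor:
    """Lists the predecessor nodes a processing node pulls from."""
    def __init__(self, predecessors=()):
        self.predecessors = list(predecessors)

class ProcNode:
    def __init__(self, operation, wiring):
        self.operation = operation    # combines the pulled input frames
        self.wiring = wiring

    def pull(self):
        inputs = [pred.pull() for pred in self.wiring.predecessors]
        return self.operation(inputs)

# two source nodes feeding a summing node, wired via their descriptors
src_a = ProcNode(lambda _: 3, WiringDescriptor())
src_b = ProcNode(lambda _: 4, WiringDescriptor())
mix = ProcNode(sum, WiringDescriptor([src_a, src_b]))
print(mix.pull())   # -> 7
```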
On a more global level, this LowLevelModel within the engine exposes a number of [[exit nodes|ExitNode]], each corresponding to a ModelPort, thus being a possible source to be handled by the OutputManager, which is responsible for mapping and connecting nominal outputs (the model ports) to actual output sinks (external connections and viewer windows). A model port isn't necessarily an absolute endpoint of connected processing nodes &mdash; it may as well reside in the middle of the network, e.g. as a ProbePoint. Besides the core engine network, there is also an [[output network|OutputNetwork]], built and extended on demand to prepare generated data for the purpose of presentation. This might be scaling or interpolating video for a viewer, adding overlays with control information produced by plugins, or rendering and downmixing multichannel sound. By employing this output network, the same techniques used to control wiring of the main path can be extended to control this output preparation step. ({{red{WIP 11/20}}} some important details need to be settled here, like how to control semi-automatic adaptation steps. But that is partially true also for the main network: for example, we don't know yet where to locate and control the faders generated as a consequence of building a summation line.)
!!!Participants and Requirements
* the ~Pipe-ID needs to be something easily usable
* output designation is just a ~Pipe-ID, actually a thin wrapper to make the intention explicit
* claims and plugs are to be implemented as LocatingPin
* as decided elsewhere, we get a [[Bus-MObject|BusMO]] as an attachment point
* we need [[Mapping]] as a new generic interface, allowing us to define ~APIs in terms of mappings
* builder moulds, operation point and processing patterns basically are in place, but need to be shaped
* the actual implementation in the engine will evolve as we go; necessary adaptations are limited to the processing patterns (which is intentional)
!Control connections
</pre>