OutputSlot: draft buffer handover protocol, remove the different models
This commit is contained in:
parent ea8841b7ad
commit 650e73c454
2 changed files with 17 additions and 14 deletions
@@ -65,16 +65,17 @@ namespace play {
 /** established output channel */
 class Connection;
 
-class BufferHandoverSink
-  : public lib::Handle<Connection>
+class BufferProvider
   {
   
   public:
-    void emit(Time, BuffHandle);
+    ~BufferProvider() { }
+    
+    BuffHandle lockBufferFor(Time);
   };
 
 
-class SharedBufferSink
+class DataSink
   : public lib::Handle<Connection>
   {
 
@@ -96,14 +97,10 @@ namespace play {
   public:
     virtual ~OutputSlot();
     
-    typedef lib::IterSource<BufferHandoverSink>::iterator Opened_BufferHandoverSinks;
-    typedef lib::IterSource<SharedBufferSink>::iterator Opened_SharedBufferSinks;
+    typedef lib::IterSource<DataSink>::iterator OpenedSinks;
     
-    template<class SINK>
     struct Allocation
       {
-        typedef typename lib::IterSource<SINK>::iterator OpenedSinks;
-        
         OpenedSinks getOpenedSinks();
         
         bool isActive();
@@ -114,7 +111,7 @@ namespace play {
 
       bool isFree() const;
 
-      Allocation<BufferHandoverSink>
+      Allocation
       allocate();
 
     private:
@@ -3313,7 +3313,7 @@ Thus the mapping is a copyable value object, based on an associative array. It ma
 First and foremost, mapping can be seen as a //functional abstraction.// As it's used at implementation level, encapsulation of detail types isn't the primary concern, so it's a candidate for generic programming: For each of those use cases outlined above, a distinct mapping type is created by instantiating the {{{OutputMapping<DEF>}}} template with a specifically tailored definition context ({{{DEF}}}), which takes on the role of a strategy. Individual instances of this concrete mapping type may be default-created and copied freely. This instantiation process includes picking up the concrete result type and building a functor object for resolving on the fly. Thus, in the way typical for generic programming, the more involved special details are moved out of sight, while still being in scope for the purpose of inlining. But there //is// a concern better encapsulated and concealed at the usage site, namely accessing the rules system. Thus the mapping lends itself to the frequently used implementation pattern of a generic frontend in a header, calling into opaque functions embedded within a separate compilation unit.
 </pre>
 </div>
-<div title="OutputSlot" modifier="Ichthyostega" modified="201106270108" created="201106162339" tags="def Concepts Player spec" changecount="14">
+<div title="OutputSlot" modifier="Ichthyostega" modified="201107042247" created="201106162339" tags="def Concepts Player spec" changecount="18">
 <pre>Within the Lumiera player and output subsystem, actually sending data to an external output requires allocating an ''output slot''
 This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows separating and abstracting the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&rarr; OutputManagement)
 
@@ -3322,10 +3322,10 @@ Each OutputSlot is a unique and distinguishable entity. It corresponds explicit
 
 In order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client. At any time, there may be only a single client using a given output slot this way. To stress this point: output slots don't provide any kind of inherent mixing capability; any adaptation, mixing, overlaying and sharing needs to be done within the nodes network producing the output data fed to the slot. (yet some special kinds of external output capabilities -- e.g. the Jack audio connection system -- may still provide additional mixing capabilities, but that's beyond the scope of the Lumiera application)
 
-Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The size and other characteristics of the data frames are assumed to be suitable. Typically this won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply with these timings when ''emitting'' data -- he's even required to provide a //current time specification// alongside the data. Yet the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.
+Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames are assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply with these timings when ''emitting'' data -- he's even required to provide a //current time specification// alongside the data. Yet the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.
 
 !!!data exchange models
-Data is handed over by the client invoking an {{{emit(time,...)}}} function on the sink handle. There are two different models how this data hand-over might be performed. On allocation of a slot, the client has to commit to one of them, allowing the output slot to adapt accordingly
+Data is handed over by the client invoking an {{{emit(time,...)}}} function on the sink handle. Theoretically there are two different models how this data hand-over might be performed. This corresponds to the fact that in some cases our own code manages the output and the buffers, while in other situations we intend to use existing library solutions or even external server applications to handle output
 ;buffer handover model
 :the client owns the data buffer and cares for allocation and de-allocation. The {{{emit()}}}-call just propagates a pointer to the buffer holding the data ready for output. The output slot implementation in turn has the liability to copy or otherwise use this data within a given time limit.
 ;shared buffer model
@@ -3354,10 +3354,16 @@ Solving this problem through //generic programming// -- i.e. coding both cases ef
 :when creating individual jobs, we utilise a //factory obtained from the output slot.//
 ;unified
 :extend and adapt the protocol so as to make both models similar; concentrate all differences //within a separate buffer provider.//
-!!!!discussion
+!!!discussion
 the generic approach looks as if it's becoming rather convoluted in practice. We'd need to hand over additional parameters to the factory, which passes them through to the actual job implementation created. And there would be a coupling between slot and job (the slot is aware it's going to be used by a job, and even provides the implementation). Obviously, a benefit is that the actual code path executed within the job is without indirections, and all written down in a single location. Another benefit is the possibility to extend this approach to cover further buffer handling models -- it doesn't pose any requirements on the structure of the buffer handling.
 If we accept to retrieve the buffer(s) via an indirection, which we kind of do anyway //within the render node implementation// -- the unified model looks more like a clean solution. It's more like doing away with some local optimisations possible if we handle the models explicitly, so it's not much of a loss, given that the majority of the processing time will be spent within the inner pixel calculation loops for frame processing anyway. When following this approach, the buffer provider becomes a third, independent partner, and the slot cooperates tightly with this buffer provider, while the client (processing node) still just talks to the slot. Basically, this unified solution is like extending the shared buffer model to both cases.
+&rArr; conclusion: go for the unified approach!
 
+!!!unified data exchange cycle
+The nominal time of a frame to be delivered is used as an ID throughout that cycle
+# within a defined time window prior to delivery, the client can retrieve the buffer from the ''buffer provider''.
+# the client has to ''emit'' within a (short) time window prior to deadline
+# now the slot gets exclusive access to the buffer for output, signalling the buffer release to the buffer provider when done.
 </pre>
 </div>
 <div title="Overview" modifier="Ichthyostega" modified="200906071810" created="200706190300" tags="overview img" changecount="13">