planning next steps: how to transform DummyPlayer into a real player
This commit is contained in:
parent ee4e6905d2
commit 687102feb3
3 changed files with 28 additions and 18 deletions
@@ -98,9 +98,12 @@ namespace play {
     public:
-      /** building a Feed effectively involves the EngineService
-       * to establish an actual rendering plan. Which is abstracted
-       * here through the RenderConfigurator instance */
+      /** building a Feed effectively requires the definition
+       * of a rendering plan through the EngineService.
+       * @param CalcStreams definition of the individual
+       *        calculation "continuations" for the engine.
+       *        These correspond to already running
+       *        render calculations. */
       Feed (engine::CalcStreams const&);
     };
@@ -79,7 +79,7 @@ namespace play {
   // using std::tr1::placeholders::_1;
   // using lib::transform;
 
-  class DefaultRenderProcessBuilder
+  class LumieraRenderProcessBuilder
     : public RenderConfigurator
     {
@@ -112,7 +112,7 @@ namespace play {
     public:
-      DefaultRenderProcessBuilder(POutputManager outputManager, Timings playbackSpeed)
+      LumieraRenderProcessBuilder (POutputManager outputManager, Timings playbackSpeed)
        : outputResolver_(outputManager)
        , playbackTimings_(playbackSpeed)
        , renderQuality_(EngineService::QoS_DEFAULT)
@@ -128,7 +128,7 @@ namespace play {
   /** @internal this builder function is used by the PlayService
-   * when it comes to creating a new PlayProcess. The generated RenderConfigurator
+   * when it comes to creating a new PlayProcess. The generated ConnectFunction
    * embodies the specific knowledge how to configure and setup the rendering or
    * playback at the EngineFacade, based on the general playback speed and
    * quality desirable for this playback process to be initiated.
@@ -143,7 +143,9 @@ namespace play {
   RenderConfigurator::ConnectFunction
   buildRenderConfiguration (POutputManager outputPossibilities, Timings playbackTimings)
   {
-    shared_ptr<RenderConfigurator> specialConfig (new DefaultRenderProcessBuilder (outputPossibilities, playbackTimings));
+    /////////////////////////////////////////////TODO this is the point to inject a Dummy implementation or anything bypassing the Lumiera engine!
+    
+    shared_ptr<RenderConfigurator> specialConfig (new LumieraRenderProcessBuilder (outputPossibilities, playbackTimings));
     
     return bind (&RenderConfigurator::buildActiveFeed, specialConfig, _1 );
   }
@@ -1778,15 +1778,20 @@ The Method of pushing frames to the Viewer widget is changed fundamentally durin
 </pre>
 </div>
-<div title="DisplayFacade" modifier="Ichthyostega" modified="200904302316" created="200902080703" tags="spec" changecount="2">
+<div title="DisplayFacade" modifier="Ichthyostega" modified="201201150238" created="200902080703" tags="spec" changecount="4">
 <pre>LayerSeparationInterface provided by the GUI.
 Access point especially for the playback. A render- or playback process uses the DisplayFacade to push media data up to the GUI for display within a viewer widget or full-screen display. This can be thought of as a callback mechanism. In order to use the DisplayFacade, client code needs a DisplayerSlot (handle), which needs to be set up by the UI first and will be provided when starting the render or playback process.
+
+!Evolving the real implementation {{red{TODO 1/2012}}}
+As it stands, the ~DisplayFacade is a placeholder for parts of the real &rarr; OutputManagement, to be implemented in conjunction with the [[player subsystem|Player]] and the render engine. As of 1/2012, the intention is to turn the DisplayService into an OutputSlot instance -- following this line of thought, the ~DisplayFacade might become some kind of OutputManager, possibly to be [[implemented within a generic Viewer element|ViewerPlayConnection]]
 </pre>
 </div>
-<div title="DisplayService" modifier="Ichthyostega" created="200902080711" tags="def" changecount="1">
+<div title="DisplayService" modifier="Ichthyostega" modified="201201150238" created="200902080711" tags="def" changecount="3">
 <pre>A service within the GUI to manage output of frames generated by the lower layers of the application.
 *providing the actual implementation of the DisplayFacade
 *creating and maintaining of [[displayer slots|DisplayerSlot]], connected to viewer widgets or similar
+
+{{red{TODO 1/2012}}} to be generalised into a service exposing an OutputSlot for display.
 </pre>
 </div>
 <div title="DisplayerSlot" modifier="Ichthyostega" modified="200902080707" created="200902080705" tags="def" changecount="2">
@@ -4517,10 +4522,10 @@ Yet it addresses some central concerns:
 The player subsystem is currently about to be designed and built up; some time ago, __Joel Holdsworth__ and __Ichthyo__ did a design study with a PlayerDummy, which is currently hooked up with the TransportControl in the Lumiera GUI. Starting from these experiences, and the general requirements of an NLE, the [[design of the Player subsystem|DesignPlayerSubsystem]] is being worked out.
 </pre>
 </div>
-<div title="PlayerDummy" modifier="Ichthyostega" modified="200906071810" created="200901300209" tags="GuiIntegration operational img" changecount="17">
+<div title="PlayerDummy" modifier="Ichthyostega" modified="201201150237" created="200901300209" tags="GuiIntegration operational img" changecount="21">
 <pre>__Joelholdsworth__ and __Ichthyo__ created this player mockup in 1/2009 to find out about the implementation details regarding integration and collaboration between the layers. There is no working render engine yet, thus we use a ~DummyImageGenerator for creating faked yuv frames to display. Within the GUI, there is a ~PlaybackController hooked up with the transport controls on the timeline pane.
 # first everything was contained within ~PlaybackController, which spawns a thread for periodically creating those dummy frames
-# then, a ~PlayerService was factored out, now implemented within Proc-Layer (probably to be relocated into the backend for the final version). A new LayerSeparationInterface called ''~DummyPlayer'' was created and set up as a [[Subsystem]] within main().
+# then, a ~PlayerService was factored out, now implemented within ~Proc-Layer (later to delegate to the emerging real render engine implementation).<br/>A new LayerSeparationInterface called ''~DummyPlayer'' was created and set up as a [[Subsystem]] within main().
 # the next step was to support multiple playback processes going on in parallel. Now, the ~PlaybackController holds a smart-handle to the ~PlayProcess currently generating output for this viewer, and invokes the transport control functions and the pull frame call on this handle.
 # then, also the tick generation (and thus the handling of the thread which pulls the frames) was factored out and pushed down into the mentioned ~PlayProcess. For this to work, the ~PlaybackController now makes a display slot available on the public GUI DisplayFacade interface, so the ~PlayProcessImpl can push up the frames for display within the GUI
 [img[Overview to the dummy player operation|draw/PlayerArch1.png]]
@@ -4533,7 +4538,7 @@ When starting playback, the display slot handle created by these preparations is
 * this uses the provided slot handle to actually //allocate// the display slot via the Display facade. Here, //allocating// means registering and preparing it for output by //one single// ~PlayProcess. For the latter, this allocation yields an actually opened display handle.
 * moreover, the ~PlayProcessImpl acquires a TickService instance, which is still throttled (not invoking the periodic callback)
 * probably, a real player at this point would initiate a rendering process, so it can fetch the actual output frames periodically.
-* on the "upper" side of the ~DummyPlayer facade, an lib::Handle object is created to track and manage this ~PlayProcesImpl instance
+* on the "upper" side of the ~DummyPlayer facade, a lib::Handle object is created to track and manage this ~PlayProcesImpl instance
 The mentioned handle is returned to the ~PlaybackController within the GUI, which uses this handle for all further interactions with the Player. The handle is ref-counting and has value semantics, so it can be stored away, passed as parameter and so on. All such handles corresponding to one ~PlayProcess form a family; when the last one goes out of scope, the ~PlayProcess terminates and deallocates any resources. Conceptually, this corresponds to pushing the "stop" button. Handles can be deliberately disconnected by calling {{{handle.close()}}} &mdash; this has the same effect as deleting a handle (when all are closed or deleted, the process ends).
 
 All the other play control operations are simply forwarded via the handle and the ~PlayProcessImpl. For example, "pause" corresponds to setting the tick frequency to 0 (thus temporarily discontinuing the tick callbacks). When allocating the display slot in the course of creating the ~PlayProcessImpl, the latter only accesses the Display facade. It can't access the display or viewer directly, because the GUI lives within a plugin; lower layers aren't allowed to call GUI implementation functions directly. Thus, within the Display facade a functor (proxy) is created to represent the output sink. This (proxy) Displayer can be used within the implementation of the periodic callback function. As usual, the implementation of the (proxy) Displayer can be inlined and doesn't create runtime overhead. Thus, each frame output call has to pass through two indirections: the function pointer in the Display facade interface, and the Glib::Dispatcher.
@@ -7743,8 +7748,8 @@ In case it's not already clear: we don't have "the" Render Engine, rat
 The &raquo;current setup&laquo; of the objects in the session is sort of a global state. The same holds true for the Controller, as the Engine can be at playback, it can run a background render, or scrub single frames. But the whole complicated subsystem of the Builder and one given Render Engine configuration can be made ''stateless''. As a benefit, we can run this subsystem multi-threaded without the need for any precautions (locking, synchronising), because all state information is just passed in as function parameters and lives in local variables on the stack, or is contained in the StateProxy which represents the given render //process// and is passed down as a function parameter as well. (note: I use the term "stateless" in the usual, slightly relaxed manner; of course there are some configuration values contained in instance variables of the objects carrying out the calculations, but these values are considered constant over the course of the object usage.)
 </pre>
 </div>
-<div title="Wiring" modifier="Ichthyostega" modified="201011090249" created="201009250223" tags="Concepts Model design draft" changecount="30">
-<pre>within Lumiera's ~Proc-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring details on how these connections are defined and controlled, how they are to be detected by the builder and finally implemented by links in the render engine.
+<div title="Wiring" modifier="Ichthyostega" modified="201201150238" created="201009250223" tags="Concepts Model design draft" changecount="36">
+<pre>within Lumiera's ~Proc-Layer, on the conceptual level there are two kinds of connections: data streams and control connections. The wiring deals with how to define and control these connections, how they are to be detected by the builder and finally implemented by links in the render engine.
 &rarr; see OutputManagement
 &rarr; see OutputDesignation
 &rarr; see OperationPoint
@@ -7752,14 +7757,14 @@ The &raquo;current setup&laquo; of the objects in the session is sort of
 
 !Stream connections
-A stream connection should result in media data traveling from a source to a sink. Here we have to distinguish between the high-level view and the situation in the render engine. At the session level and thus for the user, a //stream// is the elementary unit of "media content" flowing through the system. It can be described unambiguously by having a uniform StreamType &mdash; this doesn't exclude the stream from being inherently structured, like containing several channels. The HighLevelModel can be interpreted as creating a system of stream connections, which can be categorised as yielding two kinds of connections:
+A stream connection should result in media data traveling from a source to a sink. Here we have to distinguish between the high-level view and the situation in the render engine. At the session level and thus for the user, a //stream// is the elementary unit of "media content" flowing through the system. It can be described unambiguously by having a uniform StreamType &mdash; this doesn't exclude the stream from being inherently structured, like containing several channels. The HighLevelModel can be interpreted as //creating a system of stream connections,// which, more specifically, can be categorised into two kinds of connections -- the rigid and the flexible parts:
 * the [[Pipes|Pipe]] are rather rigid ''processing chains'', limited to using one specific StreamType. Pipes are built by attaching processing descriptors (Effect ~MObjects), where the order of attachment is given by the placement.
 * there are flexible interconnections or ''routing links'', including the ability to sum or overlay media streams, and the possibility of stream type conversions. They are controlled by the OutputDesignation ("wiring plug"), to be queried from the placement at the source side of the interconnection (i.e. at the exit point of a pipe)
-Please note, the high-level model comprises a blueprint for constructing the render engine. There is no data "flowing" through this model, thus any "wiring" may be considered conceptual. Any wiring specifications here just express the intention of getting things connected in a given way. Like e.g. a clip may find out (through query of its placement) that it is intended to produce output for some destination / bus / subgroup called "XYZ"
+Please note, the high-level model comprises a blueprint for constructing the render engine. There is no real data "flowing" through this model, thus any "wiring" may be considered conceptual. Within this context, any wiring specifications just express //the intention of getting things connected in a specific way.// Consider the example of a clip, which might find out (through query of its placement) that it is intended to produce output for some destination or bus or subgroup called "XYZ"
 
 The builder to the contrary considers matters locally. He's confined to a given set of objects handed in for processing, and during that processing will collect all [[plugs|WiringPlug]] and [[claims|WiringClaim]] encountered. While the former denote the desired //destination//&nbsp; of data emerging from a pipe, the wiring claim expresses the fact that a given object //claims to be some output destination.// Typically, each [[global pipe or bus|GlobalPipe]] raises such a claim. Both plugs and claims are processed on a "just in case they exist" basis: When encountering a plug and //resolving//&nbsp; the corresponding claim, the builder drops off a WiringRequest accordingly. Note the additional resolution, which might be necessary due to virtual media and nested sequences (read: the output designation might need to be translated into another designation, using the media/channel mapping created by the virtual entity &rarr; see [[mapping of output designations|OutputMapping]])
 
-Processing the wiring request drives the actual connection step. It is conducted by the OperationPoint, provided by and executing within a BuilderMould and controlled by a [[processing pattern|ProcPatt]]. This rather flexible setup allows for wiring summation lines, including faders, scale and offset changes and various overlay modes. Thus the specific form of the connection wired in here depends on all the local circumstances visible at that point of operation:
+The processing of such a wiring request drives the actual connection step. It is conducted by the OperationPoint, provided by and executing within a BuilderMould and controlled by a [[processing pattern|ProcPatt]]. This rather flexible setup allows for wiring summation lines, including faders, scale and offset changes and various overlay modes. Thus the specific form of the connection wired in here depends on all the local circumstances visible at that point of operation:
 * the mould is picked from a small number of alternatives, based on the general wiring situation.
 * the processing pattern is queried to fit the mould, the stream type and additional placement specifications ({{red{TODO 11/10: work out details}}})
 * the stream type system itself contributes in determining possible connections and conversions, introducing further processing patterns