DOC: requirement analysis of playback modes

This commit is contained in:
Fischlurch 2013-01-19 23:29:10 +01:00
parent 30409e66bd
commit 9f7e229a12
3 changed files with 267 additions and 5 deletions


@ -5,11 +5,64 @@ Design: Output Handling
//Menu: label Output handling
Some ideas....
The topic of proper output handling relates to several subsystems and re-appears on
various levels of abstraction within the application design and implementation.
output format::
the availability of a specific output sink is a prerequisite for supporting a
given format, configuration or frame rate. It might be desirable to prepare
the same edit for multiple output formats, which can be done by using the same
top-level sequence within several timelines with suitable output configurations.
number of output streams::
the concrete structure of objects within the session creates the ability to deliver
multiple streams of output data. On the other hand, using a given sequence as virtual
clip within another sequence creates the demand for a distinct number of output designations.
actual system configuration::
working on the same edit on multiple systems poses the challenge of adapting to
different output possibilities. Since studio setups can be rather involved, we need
to be able to recall a very specific setup, even when it cannot be used all the time
on the current system.
adapting and transforming::
since the actual output designation is resolved in several steps, it might become necessary
to adapt or transform media streams to be rendered into a provided output sink. For example,
a sound panning device might become necessary to render one sound representation system
into another one. Since the resolution of output is driven by context information,
the necessity of such transformations can arise at any time and might be treated
at a location remote from the actual source of the change.
timeline viewer connection::
any kind of playback requires the collaboration of some model content, a viewer device
and the render engine. For example, only after connecting a timeline to a viewer do an
actual playback position and play control become available.
viewer switch board::
the typical viewer in the GUI includes a switch board to choose between various probe points
and sources during playback. Thus, establishing a timeline viewer connection both adds new
sources to the switch board and necessitates building an additional output connection and
transformation network behind the scenes.
output sinks::
while each output provides some kind of destination to deliver buffers with prepared media data,
unfortunately, the various output systems and support libraries come with a wide array of often
incompatible assumptions and protocols for the client to comply with.
playback modes::
supporting the various kinds of playback modes makes the engine require some support from the
actual output device, e.g. for freezing a frame, for treating delivery glitches gracefully, or
to inhibit further output during reconfiguration stages.
Basic design principles...
--------------------------
- abstract away the actual technology used for output
- have generic *output designations* and translate them into an *output slot*
- the OutputSlot interface can be designed to match the requirements of the Engine
- assume a mechanism to handle timeouts, glitches and skips within each concrete OutputSlot implementation
Please refer to the more technical and in-depth
link:/wiki/renderengine.html#OutputDesignation%20OutputSlot%20OutputManagement%20OutputMapping%20Wiring[
discussion in the TiddlyWiki]
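The OutputSlot principle stated above can be illustrated with a minimal sketch. All names here (`OutputSlot`, `emit`, `freeze`, `FileSinkSlot`) are hypothetical illustrations, not Lumiera's actual API; the point is merely how an abstract slot interface lets the engine stay ignorant of the concrete output technology, while each implementation absorbs technology-specific quirks like glitch handling and frozen output:

```cpp
#include <cassert>
#include <vector>

// a prepared buffer of media data, identified by frame number
struct Frame
  {
    long frameNr;
    std::vector<unsigned char> data;
  };

// abstract output designation: the engine only ever talks to this interface
class OutputSlot
  {
  public:
    virtual ~OutputSlot() = default;

    // hand over a finished frame; returns false when the frame was dropped.
    // Each concrete implementation copes with timeouts and glitches internally.
    virtual bool emit (Frame const& frame)  =0;

    // inhibit any further output during reconfiguration stages
    virtual void freeze (bool on) { frozen_ = on; }

  protected:
    bool frozen_ = false;
  };

// dummy implementation standing in for a real output facility
class FileSinkSlot
  : public OutputSlot
  {
    long lastFrame_ = -1;

  public:
    bool
    emit (Frame const& frame)  override
      {
        if (frozen_) return false;             // output currently frozen
        if (frame.frameNr <= lastFrame_)       // out-of-order delivery: treat as glitch, discard
            return false;
        lastFrame_ = frame.frameNr;
        return true;                           // frame accepted
      }
  };
```

A client would request a slot through some output manager and just `emit` frames, never caring whether the sink is a sound device, a video display or a file.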


@ -0,0 +1,139 @@
Analysis: Modes of Playback
===========================
:Date: January 2013
:Author: Ichthyostega
The *player* -- shorthand for the ``playback and render control subsystem'' --
is responsible for establishing and coordinating the processes and services
necessary for generating continuous output or rendered media. The actual calculations
take place within the render engine, while the player instructs the engine in case of
any modification regarding the _parameters_ of timed data delivery. More specifically,
there is a *play controller* (state machine) for each _instance_ of playback, managing
an allocated *output slot* with a set of *output sinks*, plus a number of *calculation streams*,
which in turn are cooperating with the *frame dispatcher* and the *scheduler interface*. The
latter acts as low-level interface to the actual rendering operations, since it allows scheduling
schedule individual *frame jobs* together with their prerequisite dependencies. This
setup creates a pattern of cooperation, with the goal of delivering a continuous stream
of calculated data to the output sinks. Specific changes within the parametrisation of
that cooperation pattern result in an overall changed mode of playback or rendering.
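The scheduler interface mentioned above accepts individual frame jobs together with their prerequisite dependencies. A naive sketch can make this cooperation pattern concrete; `FrameJob`, `schedule` and the depth-first evaluation are assumptions for illustration only, not the actual scheduler interface:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// a unit of calculation, together with the jobs it depends on
struct FrameJob
  {
    long frameNr;
    std::vector<FrameJob*> prerequisites;   // e.g. decode jobs feeding a render job
    std::function<void()> work;
    bool done = false;
  };

// toy "scheduler": honour the dependency order by running prerequisites first.
// The real scheduler would additionally weigh deadlines and run jobs concurrently.
void
schedule (FrameJob& job)
{
    if (job.done) return;                   // already calculated, e.g. cached
    for (FrameJob* pre : job.prerequisites)
        schedule (*pre);
    job.work();
    job.done = true;
}
```

The `done` flag also hints at why conditional prerequisites matter: a prerequisite whose result is already available in the cache need not be scheduled at all.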
Rendering of final results
--------------------------
The classic case of ``rendering the results'' is treated within this framework as a special
case of _generic play-back_: frames are delivered whenever their calculation is done,
without any timing constraints, while, on the other hand, no glitches or quality compromises
whatsoever are acceptable. Also, the play controller exposes only limited abilities: in case of
rendering, all we need is the ability to pause and abort the process, plus a progress indicator.
Regular playback
----------------
The *standard case of playback* delivers a sequence of media data frames at a given frame rate,
with simple _linear temporal progression_. Typically, there are multiple playback feeds
linked together, each delivering a specific kind of media at a specific frame or block rate,
while being coordinated by a single play controller and working to a common delivery time goal
(``play head''). This means in turn that each individual frame has to be delivered within
a pre-planned time window. Lumiera employs an elaborate and precise timing control, based on
triggering small atomic chunks of work in a ``just in time'' manner -- there is _no_ generic
and built-in buffering beyond the low-level double buffering mechanisms utilised by most
output facilities. Generally speaking, we prefer precise beforehand planning and discarding
of untimely results over demand-driven and possibly blocking operations. Yet the buffer
management and frame cache provided by the backend for storing and passing of intermediary
and final results allows for a certain amount of leeway.
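The pre-planned timing grid described above boils down to simple arithmetic; the following sketch uses assumed names (`deadlineFor`, `acceptResult`) and wall-clock microseconds to show how each frame gets a fixed delivery deadline, and how untimely results are discarded rather than buffered:

```cpp
#include <cassert>

// wall-clock deadline for frame N of a linear playback feed,
// given the feed's start time and the frame duration (all in microseconds)
long long
deadlineFor (long frameNr, long long startTime, long long frameDuration)
{
    return startTime + frameNr * frameDuration;
}

// a calculated result is only usable when it arrives within its time window;
// late results are simply dropped, in line with the "plan ahead and discard" policy
bool
acceptResult (long frameNr, long long completionTime,
              long long startTime, long long frameDuration)
{
    return completionTime <= deadlineFor (frameNr, startTime, frameDuration);
}
```

For a 25fps feed started at wall-clock time 1000000µs, frame 25 is due exactly one second later; a result completed even one microsecond after that deadline would be discarded.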
Non-standard playback modes
---------------------------
The requirements of a software player, especially in the context of an editing application,
call for support of several specific operation modes of playback -- which can be characterised
by breaking with the rule of simple and linear progression in timed delivery. The following
requirement analysis defines our understanding and implementation approach towards these
mandatory features.
Requirement analysis of playback modes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For the task of editing a piece of media, the following modes of playback presentation can be identified +
See also the more
link:wiki/renderengine.html#NonLinearPlayback%20PlayService%20PlayProcess%20CalcStream%20FrameDispatcher%20OutputManagement[
technical discussion of playback in the TiddlyWiki]
regular playback::
In this operation mode, some media or the arrangement on one of the timelines will be presented
in the intended order and with the nominal speed, possibly somewhat adjusted for the display
or monitoring device employed. There is a running time code display and an animated presentation
of the current nominal playback position (``play head'', ``cursor'', ``locator'') in the GUI.
Generally speaking, within Lumiera this playback position isn't incorporated as a specific
device -- rather it is treated as a conceptual mapping of nominal time to wall clock time.
jumping::
creates a discontinuity in _nominal time,_ while the progress of _real wall clock time deadlines_
remains unaffected. We need to distinguish two kinds of _jumps_ or _skips_:
+
* a _pre-planned jump_ can be worked into the frame dispatch just like normal progression.
Such a jump might be due to a _skip mark_ in the timeline, or it may be used to implement looped playback.
* to the contrary, a _spontaneous re-adjustment of playback position_ is caused by unpredictable external
events, like e.g. user interaction or remote control. Such an unpredictable change in the playback plan
deprives the engine of any delivery headroom; to allow for catch-up to timed delivery, a configurable
_slippage offset_ can be added to the newly established real time deadlines after the skip, in order
to prevent drop-outs.
+
Since each skip might create an output discontinuity, a ``de-clicking'' or ``de-flickering'' device can be
expected to intercept the individual output connections to improve the usage experience while exploring media.
looping::
In looped playback the nominal playback position jumps back to the starting point, after travelling over
a pre-defined looping time span (``looping window''). This looping time span can be re-adjusted while in
playback, with immediate effect on the output. A fluid handling of loop playback is extremely important
for all kinds of fine tuning work. The loop boundaries may be chosen arbitrarily, not necessarily aligned
to any frame grid (the fact that this might cause some irregularities in the presented frame sequence is
acknowledged, yet it is deemed acceptable and preferred over employing any kind of additional
interpolation device not present in normal playback). +
Looped playback can be conceived as a series of regularly scheduled jumps.
pausing::
_paused playback_ represents a special state of the player and engine, where we expect playback to be able
to commence right away, with minimal latency. Contrast this to entering playback from stopped state, where
services need to be initiated and connected, leading to a noticeable pre-delay. Basically, any other
playback mode can be entered from paused state right away, and it is reasonable for the engine to
prepare for this through background pre-rendering.
single stepping::
through user interaction, the current nominal playback position is moved to an adjacent frame,
while sending exactly this one frame to output as a still image. This is a critical operation for frame
accurate editing and should be reachable from any other playback mode with minimal latency. Lumiera treats
this mode of operation as an extension to _paused state_.
playback direction and speed::
while regular playback establishes a 1:1 relation between nominal time and wall clock time, a range of
other proportions need to be supported, like e.g. halved and doubled presentation speed both forwards and
backwards; it is conceivable to allow irregular fractions too. Support for speed adjustments in sound
playback is more involved than it might seem at first glance, so it is common to allow for a degraded
sound representation, or to fall back on fast-cueing.
fast cueing::
The purpose of cueing is to skip through a large amount of material to spot some specific parts.
For this to work, the presented material needs to remain recognisable in some way. Typically this is done
by presenting small continuous chunks of material interleaved with regular skips. For editing purposes, this
method is often perceived as lacking, especially by experienced editors. By contrast, the older,
mechanical editing systems had the ability to run at an actually increased frame rate, without skipping
any material.
+
* it is very common to employ speed acceleration in multiple steps to alleviate travelling to more distant
locations within the media
* to improve the editor's working experience, we might consider actually raising the frame rate, given the
increased availability of high-framerate capable displays.
* another approach would be to apply some kind of granular synthesis, dissolving several consecutive segments
of material. The latter would prompt to include a specific buffering and processing device not present
in the render path for normal playback.
scrubbing::
this term relates to working with physical media, like magnetic tape or film strips in the past: instead of using
the built-in transport device, the editor would spin the reels by hand in order to navigate to a desired position
intuitively. When translated to the world of digital media, the actual scrubbing facility is an interactive device,
typically even involving some kind of hardware control interface (``jog shuttle''). The playback engine needs to
support scrubbing, which translates into chasing a playback target that is re-adjusted regularly. Again it
would be desirable to include some kind of acceleration strategy, especially to support a soft landing.
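Several of the modes analysed above reduce to variations of one time mapping: nominal time relates to wall-clock deadlines through a speed factor, a loop window folds the play position back, and a slippage offset restores headroom after a spontaneous skip. The following sketch combines these; all names (`PlaybackTiming`, `wallDeadline`, `loopPosition`) are illustrative assumptions, not Lumiera's actual interfaces:

```cpp
#include <cassert>

// relation between nominal time and real wall-clock time for one play process
struct PlaybackTiming
  {
    long long anchorWall;     // wall-clock time corresponding to anchorNominal
    long long anchorNominal;  // nominal time at the anchor point
    double    speed;          // 1.0 = normal; 2.0 = double speed; negative = backwards
    long long slippage = 0;   // configurable headroom added after a spontaneous skip

    // wall-clock deadline for delivering the frame at the given nominal time
    long long
    wallDeadline (long long nominal)  const
      {
        return anchorWall
             + (long long)((nominal - anchorNominal) / speed)
             + slippage;
      }
  };

// looped playback as a series of pre-planned jumps:
// fold a monotonically growing play position back into the looping window
long long
loopPosition (long long playPos, long long loopStart, long long loopLen)
{
    return loopStart + (playPos - loopStart) % loopLen;
}
```

A spontaneous skip would re-anchor the mapping at the new position and set a non-zero `slippage`, giving the engine a head start to catch up with timed delivery.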


@ -2060,7 +2060,7 @@ To support this usage pattern, the Fixture implementation makes use of the [[PIm
* moreover, this necessitates a tight integration down to implementation level, both with the clean-up and the render processes themselves
</pre>
</div>
<div title="FixtureStorage" modifier="Ichthyostega" modified="201301132208" created="201012140231" tags="Builder impl operational draft" changecount="44">
<div title="FixtureStorage" modifier="Ichthyostega" modified="201301152312" created="201012140231" tags="Builder impl operational draft" changecount="46">
<pre>The Fixture &amp;rarr; [[data structure|FixtureDatastructure]] acts as umbrella to hook up the elements of the render engine's processing nodes network (LowLevelModel).
Each segment within the [[Segmentation]] of any timeline serves as ''extent'' or unit of memory management: it is built up completely during the corresponding build process and becomes immutable thereafter, finally to be discarded as a whole when superseded by a modified version of that segment (new build process) -- but only after all related render processes (&amp;rarr; CalcStream) are known to be terminated.
@ -2115,7 +2115,7 @@ But actually I'm struck here, because of the yet limited knowledge about those r
If the latter question is answered with &quot;No!&quot;, then the problem gets simple in solution, but maybe memory consuming: In that case, //all//&amp;nbsp; processes linked to a timeline gets affected and thus tainted; we'd just dump them onto a pile and delay releasing all of the superseded segments until all of them are known to be terminated.
!!re-visited {{red{WIP 1/13}}}
the last conclusions drawn above were confirmed by the further development of the overall design. Yes, we do //supersede// frequently and liberally. This isn't much of a problem, since the preparation of new jobs, i.e. the [[frame dispatch step|Dispatcher]] is performed chunk wise. A //continuation job// is added at the end of each chunk, and this continuation will pick up the task of job planning in time.
the last conclusions drawn above were confirmed by the further development of the overall design. Yes, we do //supersede// frequently and liberally. This isn't much of a problem, since the preparation of new jobs, i.e. the [[frame dispatch step|FrameDispatcher]] is performed chunk wise. A //continuation job// is added at the end of each chunk, and this continuation will pick up the task of job planning in time.
At the 1/2013 developer's meeting, Cehteh and myself had a longer conversation regarding the topic of notifications and superseding of jobs within the scheduler. The conclusion was to give ''each play process a separate LUID'' and treat this as ''job group''. The scheduler interface will offer a call to supersede all jobs within a given group.
@ -2124,6 +2124,9 @@ Some questions remain though
* do they exist per CalcStream ?
* is it possible, to file these dedicated dispatch informations gradually?
* how to store and pass on the control information for NonLinearPlayback?
since the intention is to have dedicated dispatch tables, these would implement the {{{engine.Dispatcher}}} interface and incorporate some kind of strategy corresponding to the mode of playback. The chunk wise continuation of the job planning process would have to be reformulated in terms of //real wall clock time// -- since the relation of the playback process to nominal time can't be assumed to be a simple linear progression in all cases.
</pre>
</div>
<div title="ForwardIterator" modifier="Ichthyostega" modified="200912190027" created="200910312114" tags="Concepts def spec" changecount="17">
@ -3331,7 +3334,7 @@ some points to note:
&amp;rarr; more fine grained [[implementation details|RenderImplDetails]]
</pre>
</div>
<div title="NonLinearPlayback" modifier="Ichthyostega" created="201301132217" tags="def Player Rendering draft" changecount="1">
<div title="NonLinearPlayback" modifier="Ichthyostega" modified="201301192102" created="201301132217" tags="def Player Rendering draft" changecount="26">
<pre>The calculations for rendering and playback are designed with a base case in mind: calculating a linear sequence of frames consecutive in time.
But there are several important modes of playback, which violate that assumption...
* jump-to / skip
@ -3342,11 +3345,78 @@ But there are several important modes of playback, which violate that assumption
** slow-motion
** fast-forward/backward shuffling
* scrubbing
* freewheeling (as fast as possible)
* free-wheeling (as fast as possible)
!search for a suitable implementation approach {{red{WIP 1/2013}}}
The crucial point seems to be when we're picking a starting point for the planning related to a new frame. &amp;rarr; {{{PlanningStepGenerator}}}
Closely related is the question when and how to terminate a planning chunk, what to dispose as a continuation, and when to cease planning altogether.
!requirement analysis
These non linear playback modes do pose some specific challenges on the overall control structure distributed over the various collaborators within the play and render subsystem. The following description treats each of the special modes within its relations to this engine context
;jumping
:creates a discontinuity in //nominal time,// while the progress of real wall clock time deadlines remains unaffected
:we need to distinguish two cases
:* a //pre planned jump// can be worked into the frame dispatch just like normal progression. It doesn't create any additional challenge on timely delivery
:* to the contrary, a //spontaneous re-adjustment of playback position// deprives the engine of any delivery headroom, forcing us to catch up anew.&lt;br/&gt;&amp;rarr; we should introduce a configurable slippage offset, to be added on the real time deadlines in this case, to give the engine some head start
:since each skip might create an output discontinuity, the de-clicking facility in the output sink should be notified explicitly (implying that we need an API, reachable from within the JobClosure)
;looping
:looped playback is implemented as repeated skip at the loop boundary; as such, this case always counts as pre planned jump.
;pausing
:paused playback represents a special state of the engine, where we expect playback to be able to commence right away, with minimal latency
:&amp;rarr;we need to take several coordinated measures to make this happen
:* when going to paused state, previously scheduled jobs aren't cancelled, rather rescheduled to background rendering, but in a way which effectively pins the first frames within cache
:* additionally, the OutputSlot needs to provide a special mode where output is //frozen// and any delivery is silently blocked. The reason is, we can't cancel already triggered output jobs
:* on return to normal playback, we need to ensure that the availability of cached results will reliably prevent superfluous prerequisite jobs from being scheduled at all &amp;rarr; we need conditional prerequisites
:The availability of such a pausing mechanism has several ramifications. We could conceive an auto-paused mode, switching to playback after sufficient background pre-rendering to ensure the necessary scheduling headroom. Since the link to wall clock time and thus the real time deadlines are established the moment actual playback starts, we might even transition through auto-paused mode whenever playback starts from scratch into some play mode.
;single stepping
:this can be seen as application of paused mode: we'd schedule a single play-out job, as if resuming from paused state, but we re-enter paused state immediately
;reversed play direction
:while basically trivial to implement, the challenge lies in possible naive implementation decisions assuming monotonic ascending frame times. Additionally, media decoders might need some hinting
:reversed (and speed adjusted) sound playback is surprisingly tricky -- even the most simplistic solution forces us to insert an effect processor into the calculation path.
;speed variations
:the relation between nominal time and real wall clock time needs to include a //speed factor.//
;fast cueing
:the purpose of cueing is to skip through a large amount of material to spot some specific parts. For this to work, the presented material needs to remain recognisable in some way. Typically this is done by presenting small continuous chunks of material interleaved with regular skips. For editing purposes, this method is often perceived as lacking, especially by experienced editors. By contrast, the older mechanical editing systems had the ability to run at an actually increased frame rate, without skipping any material.
:&amp;rarr; for one, this is a very specific configuration of the loop play mode.
:&amp;rarr; it is desirable to improve the editor's working experience here. We might consider actually increasing the frame rate, given the increased availability of high-framerate capable displays. Another approach would be to apply some kind of granular synthesis, dissolving several consecutive segments of material. The latter would prompt to include a specific buffering and processing device not present in the render path for normal playback. Since we do dedicated scheduling for each playback mode, we're indeed in a position to implement such facilities.
;scrubbing
:the actual scrubbing facility is an interactive device, typically even involving some kind of hardware control interface. But the engine needs to support scrubbing, which translates into chasing a playback target that is re-adjusted regularly. The implementation facilities discussed thus far are sufficient to implement this feature, combined with the ability of live changes to the playback mode.
;free-wheeling
:at first sight, this feature is trivial to implement. All it takes is to ignore any real time scheduling targets, so it boils down to including a flag into the [[timing descriptor|Timings]]. But there is a catch. Since free-wheeling only makes sense for writing to a file-like sink, the engine might actually be overrunning the consumer. In the end, we get to choose between the following alternatives: do we allow the output jobs to block in that case, or do we want to implement some kind of flow regulation?
!essential implementation level features
Drawing from this requirement analysis, we might identify some mandatory implementation elements necessary to incorporate these playback modes into the player and engine subsystem.
;for the __job planning stage__:
:we need a way to interact with a planning strategy, included when constituting the CalcStream
:* ability for planned discontinuities in the nominal time of the &quot;next frame&quot;
:* ability for placing such discontinuities repeatedly, as for looped playback
:* allow for interleaved skips and processed chunks, possibly with increased speed
:* ability to inject meta jobs under specific circumstances
:* placing callbacks into these meta jobs, e.g. for auto-paused mode
;for the __timings__:
:we need flexibility when establishing the deadlines
:* allow for an added offset when re-establishing the link between nominal and real time on replanning and superseding of planned jobs
:* flexible link between nominal and real time, allowing for reversed playback and changed speed
:* configurable slippage offset
;for the __play controller__:
:basically all changes regarding non linear playback modes are initiated and controlled here
:* a paused state
:* re-entering playback by callback
:* re-entering paused state by callback
:* a link to the existing feeds and calculation streams for superseding the current planning
:* use a strategy for fast-cueing (interleaved skips, increased speed, higher framerate, change model port to use a preconfigured granulator device)
;for the __scheduler interface__:
:we need some structural devices actually to implement those non-standard modes of operation
:* conditional prerequisites (prevent evaluation, re-evaluate later)
:* special &quot;as available&quot; delivery, both for free-wheeling and background
:* special way of cancelling jobs, which effectively re-schedules them into background.
:* a way for hinting the cache to store background frames with decreasing priority, thus ensuring the foremost availability of the first frames when picking up playback again
;for the __output sinks__:
:on the receiver side, we need some support to generate smooth and error free output delivery
:* automated detection of timing glitches, activating the discontinuity handling (&amp;raquo;de-click facility&amp;laquo;)
:* low-level API for signalling discontinuity to the OutputSlot. This information pertains to the currently delivered frame -- this is necessary when this frame //is actually being delivered timely.//
:* high-level API to switch any OutputSlot into &quot;frozen mode&quot;, disabling any further output, even in case of accidental delivery of further data by jobs currently in progression.
:* possibility to detect and signal overload of the receiver, either for blocking or for flow-control
</pre>
</div>
<div title="ObjectCreation" modifier="Ichthyostega" modified="201004031621" created="200709030139" tags="impl design" changecount="20">