re-read some of the RfC, fix small markup issues

Fischlurch 2011-04-16 02:47:42 +02:00
parent 207fa7c13c
commit eb087b98b1
7 changed files with 121 additions and 95 deletions

(binary image file changed; size 25 KiB before and after)


@ -52,30 +52,14 @@ Tasks
* footers (or headers) to configure common editors to use this style by default
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Rationale
~~~~~~~~~
Conclusion
----------
We agreed on GNU style.
CT:: '2007-07-03 04:04'
@ -88,6 +72,8 @@ think all other reasons will lead us to nowhere!
Although I'm used to a BSD/KNF-like coding style, I will try the GNU one. After
all, the Wikipedia page mentions no disadvantages of that style :)
MichaelPloujnikov:: '2007-06-27 17:17'
I just proposed K&R because it is widely accepted. Personally, I was never very
fond of K&R style, I always preferred putting opening braces to the left. I
@ -96,5 +82,8 @@ ECLIPSE comes with presets for all these styles :-P ). Anyhow, I can adapt to
most any style. The only thing I really dislike is using tabs (with the
exception of database DDLs and CSound files, where tabs are actually helpful) :)
Ichthyo:: '2007-06-27 20:55'
''''
Back to link:/documentation/devel/rfc.html[Lumiera Design Process overview]


@ -94,7 +94,8 @@ relies on that interface for discovering session contents. Besides that, we
need more implementation experience.
Some existing iterators or collection-style interfaces should be retro-fitted.
See http://issues.lumiera.org/ticket/349[Ticket #349]. +
Moreover, we need to
gain experience about mapping this concept down into a flat C-style API.
@ -133,6 +134,23 @@ compromising the clean APIs.
Comments
--------
//comments: append below
Now in use for more than a year, without turning up any serious problems.
The only _minor concern_ I can see is that this concept, as such, doesn't solve
the problem with exposing implementation details of the underlying container on the API.
Similar to STL Iterators, the actual implementation representation is only disguised
behind a 'typedef'. But, generally speaking, this is an inevitable consequence of the
``zero overhead'' abstraction. For the cases when an indirection (via VTable) is feasible,
I've created the 'IterSource' template, which sticks to this Lumiera Forward Iterator
concept, but provides an opaque frontend, allowing complete decoupling from the
actual implementation. Besides that, over time I've written several standard adapters
for the most common STL containers, plus Map, key and value extractors.
Ichthyostega:: 'Sa 16 Apr 2011 00:20:13 CEST'
//endof_comments:
Final


@ -6,51 +6,58 @@
-------------------------------------
.Summary
****************************************************************************
The Session or Project contains multiple top-level *Timeline* elements.
These provide an (output)configuration and global busses, while the actual
content _and the tree of tracks_ is contained in *Sequences*.
These can also be used as *meta clips* and thus nested arbitrarily.
****************************************************************************
Relation of Project, Timeline(s), Sequence(s) and Output generation
-------------------------------------------------------------------
In the course of our discussions it has meanwhile become clear that Lumiera will
show multiple _timeline-like_ views within one project. Similarly it's clear
now that we will support link:EDLsAreMetaClips.html[nested Sequences as meta-clips].
The purpose of this entry is to try to settle on some definitions
and clarify the relationships between these concepts.
Definitions
~~~~~~~~~~~
Project:: the top-level context in which all edit work is done over an
extended period of time. The Project can be saved and re-opened. It is
comprised of the collection of all things the user is working on, it contains
all information, assets, state and objects to be edited.
Session:: the current in-memory representation of the Project when opened
within an instance of Lumiera. This is an implementation-internal term. For
the GUI and the user's POV we should always prefer the term "Project" for the
general concept.
Timeline:: the top-level element within the Project. It is visible within a
_timeline view_ in the GUI and represents the effective (resulting)
arrangement of media objects, resolved to a finite time axis, to be rendered
for output or viewed in a Monitor (viewer window). Timeline(s) are top-level
and may not be further combined. A timeline is comprised of:
* a time axis in absolute time (WIP: not clear if this is an entity or just a
conceptual definition)
* a _PlayController_
* a list of global _Pipes_ representing the possible outputs (master
busses)
* exactly one top-level _Sequence_, which in turn may contain further
nested Sequences
Timeline View:: a view in the GUI featuring a given timeline. There might be
multiple views of the same timeline, all sharing the same PlayController. A
proposed extension is the ability to _focus_ a timeline view to a
sub-Sequence contained within the top-level sequence of the underlying
Timeline. (Intended for editing meta-clips)
Sequence:: A collection of _MObjects_ placed onto a tree of tracks. (This
entity was formerly named _EDL_ -- an alternative name would be
_Arrangement_.) By means of this placement, the objects could be anchored
relative to each other, relative to external objects, or absolutely in time.
Placement and routing information can be inherited down the track tree, and
missing information is filled in by configuration rules. This way, a sequence
@ -60,26 +67,26 @@ Definitions
just a single root track and sends directly to the master busses of the
timeline.
Pipe:: the conceptual building block of the high-level model. It can be
thought of as a simple linear processing chain. A stream can be _sent to_ a
pipe, in which case it will be mixed in at the input, and you can _plug_ the
output of a pipe to another destination. Further, effects or processors can be
attached to the pipe. Besides the global pipes (busses) in each Timeline, each
clip automatically creates N pipes (one for each distinct content stream, i.e.
normally N=2, namely video and audio)
PlayController:: coordinating playback, cueing and rewinding of a
_PlayheadCursor_ (or multiple, in case there are multiple views and/or
monitors), and at the same time directing a render process to deliver the
media data needed for playback. Actually, the implementation of the
PlayController(s) is assumed to live in the backend.
RenderTask:: basically a PlayController, but collecting output
directly, without moving a PlayheadCursor (maybe a progress indicator) and
not operating in a timed fashion, but freewheeling or in background mode
Monitor/Viewer:: a viewer window to be attached to a timeline. When attached, a
monitor reflects the state of the timeline's PlayController, and it attaches
to the timeline's global pipes by stream-type match, showing video as monitor
image and sending audio to the system audio port (Alsa or Jack). Possible
extensions are for a monitor to be able to attach to probe points within the
@ -91,7 +98,7 @@ Definitions
Relations
~~~~~~~~~
image:{imgd}/ProjectTimelineSequenceUML.png[UML: Relation of Project, Timeline, Sequence and Output]
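As a purely illustrative aside (not the actual Lumiera code), the ownership relations depicted in the UML figure might be sketched in C++ roughly as follows, with the Project owning both Timelines and Sequences, while each Timeline merely refers to its top-level Sequence. All type names here are assumptions for illustration only.

```cpp
#include <memory>
#include <string>
#include <vector>

// A Sequence may contain further nested Sequences (meta-clips).
struct Sequence {
    std::string name;
    std::vector<std::shared_ptr<Sequence>> nested;
};

struct PlayController { /* playback state; assumed to live in the backend */ };

struct Timeline {
    PlayController player;                   // exactly one per timeline
    std::vector<std::string> globalPipes;    // master busses (possible outputs)
    std::shared_ptr<Sequence> topSequence;   // referred to, not exclusively owned
};

struct Project {
    std::vector<std::shared_ptr<Sequence>> sequences;  // owned by the project
    std::vector<Timeline> timelines;                   // independent top-level timelines
};
```

Because the sequences are owned by the Project, two Timelines can share the same top-level Sequence, yielding independent play controllers over one and the same arrangement.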
@ -99,7 +106,7 @@ image:images/fig132741.png[Relation of Project Timeline Sequence Output]
* this UML shows the relation of concepts, not so much their implementation
* within one Project, we may have multiple independent timelines and at the
same time we may have multiple views of the same timeline.
* all playhead displays within different views linked to the _same_
underlying timeline are effectively linked together, as are all GUI widgets
representing the same PlayController owned by a single timeline.
* I am proposing to do it this way by default, because it seems to be a best
@ -108,7 +115,7 @@ image:images/fig132741.png[Relation of Project Timeline Sequence Output]
* the timeline view is modeled to be a sub-concept of "timeline" and thus can
stand in. Thus, to start with, for the GUI it makes no difference whether
it talks to a timeline view or a timeline.
* each timeline _refers_ to a (top-level) sequence. I.e. the sequences
themselves are owned by the project, and theoretically it's possible to refer
to the same sequence from multiple timelines directly and indirectly.
* besides, it's also possible to create multiple independent timelines -- in
@ -116,11 +123,11 @@ image:images/fig132741.png[Relation of Project Timeline Sequence Output]
configuration gives the ability to play the same arrangement in parallel with
multiple independent play controllers (and thus independent playhead
positions)
* to complement these possibilities, I'd propose to give the _timeline view_
the ability to be focussed (re-linked) to a sub-sequence. This way, it
would stay connected to the main play control, but at the same time show a
sub-sequence _in the way it will be treated as embedded within the top-level
sequence._ This would be the default operation mode when a meta-clip is
opened (and shown in a separate tab with such a linked timeline view). The
reason for this proposed handling is again to give the user the least
surprising behaviour. Because, when -- on the contrary -- the
@ -129,8 +136,8 @@ image:images/fig132741.png[Relation of Project Timeline Sequence Output]
reserved for advanced use, e.g. when multiple editors cooperate on a single
project and a sequence has to be prepared in isolation prior to being
integrated in the global sequence (featuring the whole movie).
* one rather unconventional feature to be noted is that the _tracks_ are
within the _sequences_ and not on the level of the global busses as in most
other video and audio editors. The rationale is that this allows for fully
exploiting the tree structure, even when working with large and compound
projects; it allows for sequences being local clusters of objects including
@ -141,10 +148,10 @@ image:images/fig132741.png[Relation of Project Timeline Sequence Output]
Tasks
^^^^^
* Interfaces on the GUI and Proc level need to be fully specified.
Especially, "Timeline" is now promoted to be a new top-level entity within
the Session
* communication between the PlayController(s) and the GUI needs to be worked
out
* the stream type system, which is needed to make this default connection
scheme work, is currently just planned and drafted. Doing an exemplary


@ -153,7 +153,8 @@ Comments
//comments: append below
.State -> Final
considered common practice
Do 14 Apr 2011 03:46:07 CEST Christian Thaeter <ct@pipapo.org>


@ -13,16 +13,16 @@ The only way to defeat "featuritis" is to build upon a coherent design --
+
which in turn relies upon a more or less explicit understanding what the
application should be like, and the way the prospective user is thought to work
with the program. Today, a generally accepted 'method' for building up such
an understanding is to do a *use case analysis*. Such a formal analysis would
require identifying all usage scenarios with the involved actors and parts of
the system, and then to refine them in detail and break them down into distinct
use cases. Here, I'll try a rather informal variant of such an analysis. I'll
restrict myself to describing the most important usage situations.
'please participate in the discussion. It may well be that everything detailed
here is self-evident, but I doubt it. At least the grouping and the omissions
kind-of reflect a certain focus of the project'
Describing basic Lumiera usage situations
@ -57,7 +57,7 @@ You build up a simple linear cut sequence. Either by
of) unwanted parts
- playing source media and spilling over (insert, overwrite) some parts into
the final assembly
- dragging over the pre-organised clips from clip folders to build up the
assembly.
Sound is either used immediately as-is (the soundtrack attached to the media),
@ -71,16 +71,16 @@ Scenario (3) : Augmenting an assembly
Without the intention to rework it from scratch, an already existing simple
assembly is augmented, beautified and polished, maybe to conform with
professional standards. This includes the ``rescue'' of a somewhat questionable
assembly by repairing localized technical problems, but also shortening and
re-arranging, and in extreme cases even changing the narrative structure. A
distinctive property of this usage scenario is that work happens rather in the
context of 'tasks' (passes) -- not so much isolated operations:
- the task may be to get the rhythm or overall tempo right, and thus you go
over the sequence and do trim, roll, shuffle or slide edits.
- you may want to ``fold-out'' parts of the sound, thus interweaving o-sound
and music
- there may be a sound overdubbing and replacing pass
- you may want to walk certain automation curves and adjust levels (sound
volume or tone, fade, brightness/contrast/colour)
@ -91,7 +91,7 @@ context of ''tasks'' (passes) -- not so much isolated operations:
Scenario (4) : Compositional work
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here I define *compositional work* as a situation where you deal with
multiple more or less independent sequences going on in parallel, similar to a
musical score. Frequently, we encounter compositional parts embedded in an
otherwise linear work, and often those parts evolve when Scenario (3) is driven
@ -102,29 +102,29 @@ to the extreme.
- a movie with a complex narrative structure may induce compositional work on
a very large scale (and existing applications frequently fall short on
supporting such)
- _compositing_ often leads to compositional work. Special FX, masked objects
being arranged, artificial elements to be integrated.
- similarly any collage-like or heavily layered arrangements lend themselves
to requiring compositional work.
The common distinctive property of all those situations is: objects are
embedded into a primary context and have to obey the rules of this context, and
at the same time have a close correlation to other objects which are embedded
in a completely different (``orthogonal'') context. (To give a catchy example:
assume a CG monster has to be integrated. Besides the masked monster object,
you have several colouring and blurring layers at completely different levels
in the layering order, and at the same time you have correlated sound objects,
which need to be integrated into the general sound-scape. And now your primary
job is to get the movement and timings of the monster right in relation to the
primary timing grid established by the existing edit)
The working style and thus the tool support necessary for compositional work is
completely different to Scenario (3). After an initial build-up (which often is
very systematic), the working profile can be characterized by tweaks to various
parameters to be done in-sync at widely separated sites within the session,
together with repeated cycles of ``do it'', ``assess the result'', ``undo all and
do some small detail differently''. Typically there is the need for much navigation
(contrast this to Scenario (3) where you work in _tasks_ or _passes_)
Scenario (5) : Working with Sound
@ -150,7 +150,7 @@ operations:
While clearly some of those tasks are always better done within a dedicated
application, the ability to carry out this work partially within the main
session and even while the basic edit is still in flux -- may open new artistic
possibilities.
Scenario (6) : Large Projects
@ -158,7 +158,7 @@ Scenario (6) : Large Projects
At first sight, the operations and the work to be done in large projects are the
same as in small ones. But large projects tend to create sort of an additional
"layer" on top of the usage scenarios described thus far, which will "kick in"
``layer'' on top of the usage scenarios described thus far, which will ``kick in''
at various places.
- work may be divided among several editors, working on separate parts
@ -169,8 +169,9 @@ at various places.
a certain transition (template), the way fade-outs are done, a certain
colour profile. Possibly, this stuff needs to be adjusted all over the
project.
- there will be a general (large scale) timing grid with distinct ``check points''
and probably there is the need to navigate to the different parts of the
whole project.
- there may be the necessity to build several versions of the same project in
parallel (e.g. a short version and an extended director's cut)
- you may have to care for such nasty and tedious things as keeping sub-titles
@ -187,7 +188,7 @@ Several people work on a project.
- A longer sequence might be split up into parts, each one edited by another
person. The parts will be collected and assembled by the chief editor. Edits
to the parts will still be possible, but a system of permissions allows
locking down access to parts of the edit, so as to prevent unexpected interference.
- Arrangements based on the same resources can be branched, tagged and merged.
- Edits are logged with usernames
- Markers can be shown/hidden on a per-creator basis.
@ -201,12 +202,12 @@ Several people work on a project.
Scenario (8) : Script driven
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The application is started ``headless'' (without GUI) and controlled via an
API. Either an existing session is loaded, or a new session is created and
populated. Then, some operations have to be done in a systematic manner,
requiring a way to address parts of the session both unambiguously and in a way
easy to access and control from a programming environment (you can't just
``see'' the right clip, it needs to be tagged). Finally, there might be an
export or render step. A variation of this scenario is the automatic extraction
of some information from an existing project.
@ -218,26 +219,26 @@ Discussion
* describing such scenarios, even if hypothetical, creates an anchor or point of
reference for feature/GUI design work to be done in detail
* relating features to working situations helps to see what is really important
and what is rather of technical merit
* compiling and discussing this list helps to shape the character of the
application as a whole
* the above compilation relates individual features to a general production
process.
* the goal of this compilation is to be _fairly complete_
.Cons
* any of those descriptions is artificial
* sometimes it is better to develop an application technology-driven,
especially when it is technologically challenging to get it to work properly.
* having such a large-scale vision may frighten away people who otherwise
might jump in and implement some crazy but valuable new feature
* the listed usage scenarios are intended to be _fairly complete_, which can be a
limitation or even self-deception. Better to have an open-ended list.
* the above compilation seems quite conventional and explicitly leaves out some
scenarios
- networked, distributed scenarios, compound applications
- television, live video, VeeJay-ing
- cartoons, animations, game design
@ -247,11 +248,11 @@ Discussion
* just start out with one scenario directly at hand (e.g. the simple assembly)
and not worry about the rest
* rather than defining those scenarios (which are necessarily hypothetical),
stick to the operation level. E.g. a use case would rather be
on the level of ``trimming a clip''
* doing a complete state-of-the-art UML use case analysis.
* after having created the foundation, rather stick to an XP approach, i.e.
implement, integrate and release small "usage stories"
implement, integrate and release small ``usage stories''
@ -281,15 +282,19 @@ circumstances of production change quite dramatically.
Comments
--------
//comments: append below
.Template e.g. for regular TV series
Constraints to fit all contents within a fixed timeline, cover topic, select
collage of iconic scenes from archived and collected footage. Update intro and
credit roll for each episode. Add in stop-motion and 3D model animations with
vocal commentaries. Gather together separate items from "outworkers". Tree
(@)SIG(@)
vocal commentaries. Gather together separate items from "outworkers".
Tree:: '2008-12-27 08:36:36'
//endof_comments:
Parked


@ -6,8 +6,14 @@ Parked Design Proposals
-> read link:../rfc.html[more about Lumiera RfC] and the Design Process
The RfC entries listed here were proposed but not finally decided or agreed on;
moreover, parked RfCs are put here until someone has time to review and improve
them further towards a final decision.
Labelling a RfC as _parked_ doesn't entail any pre-judgment -- it just might be
beyond our current working focus, or we're lacking the resources to care for this
topic right now.