rewrap text to 80 columns (except for comments)

Christian Thaeter 2010-06-04 19:30:51 +02:00 committed by Ichthyostega
parent ec4eabb95f
commit 12f6e48514


without breaking compatibility. Projects you create nowadays with
Lumiera should be usable in foreseeable future, at least there needs
to be a guaranteed upgrade path.
Fundamental Forces
------------------
// the basic ideas which drive the lumiera design
The Lumiera design is guided by a small number of basic principles. Keeping
these in mind helps to understand how more interesting things can be built up
on that foundation.
Open ended combining of Building Blocks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lumiera is not so much defined in terms of _features_ -- rather it allows you
to combine basic _building blocks._ These basic modules, entities or objects
each have a distinct _type_ which explicitly limits their connections. Within
these limits, any conceivable combination shall be supported without further
hidden limitations.
Lumiera is neither a set of Lego bricks, nor is it the business application
driven by finite usage stories.
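This principle of type-limited composition can be sketched in a few lines of
Python. This is a conceptual illustration only; all class names, type names and
the `connect` helper are invented and do not correspond to actual Lumiera code:

```python
# Conceptual sketch only -- names are invented for illustration and do
# not correspond to the actual Lumiera code base.

class Block:
    """A building block with an explicit input and output stream type."""
    def __init__(self, name, in_type, out_type):
        self.name = name
        self.in_type = in_type
        self.out_type = out_type
        self.next = None

    def connect(self, downstream):
        # The only restriction is the explicit type: within that limit,
        # any conceivable combination is allowed.
        if self.out_type != downstream.in_type:
            raise TypeError(f"cannot feed {self.out_type} "
                            f"into {downstream.in_type}")
        self.next = downstream
        return downstream

clip    = Block("clip",    "file",  "video")
blur    = Block("blur",    "video", "video")
encoder = Block("encoder", "video", "stream")

clip.connect(blur).connect(encoder)   # fine: the types line up

fade = Block("fade", "audio", "audio")
try:
    blur.connect(fade)                # video into audio: rejected
    ok = False
except TypeError:
    ok = True
```

The point is that nothing but the explicit type constrains the combination --
there is no hidden whitelist of "supported" arrangements.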
Medium level Abstraction and Project specific Conventions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These building blocks within Lumiera create a moderate level of abstraction; a
user may, if desired, directly manipulate through the GUI clips, individual
effects, masks, and even the placements xref:placement[->] used to stitch the
objects together, which is comparatively low-level. On the other hand, these
abstractions shield the user from the actual technical details like format
conversions and the accessing of individual channels.
To complement this approach, Lumiera does _not_ rely on hard-wired, global
conventions -- rather we allow building up project-specific conventions and
rules xref:rules[->] to fit the given requirements and preferred working
style. To help you get started, Lumiera will ship with a fairly conventional
project template and default configuration.
[[graphs]]
Rendering is Graph Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Processing of Video (and audio) data can be generalized as graph processing
(more precisely ``directed acyclic graphs''). Data flows on the edges of these
graphs and is processed in the nodes.
image:graph.svg[Example for a graph]
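A minimal sketch of this idea follows. The node names mirror the text, but the
graph itself and its string-building "processing" are invented purely for
illustration:

```python
# Sketch: a tiny directed acyclic graph.  Data flows on the edges and
# is processed in the nodes.  Names and operations are invented for
# illustration only.

graph = {                  # node -> list of upstream nodes (the edges)
    "demux": [],
    "video": ["demux"],
    "audio": ["demux"],
    "mux":   ["video", "audio"],
}

def process(node, inputs):
    # stand-in for the real per-node processing
    return f"{node}({', '.join(inputs)})" if inputs else node

def evaluate(target):
    # depth-first evaluation; acyclicity guarantees termination
    return process(target, [evaluate(up) for up in graph[target]])

print(evaluate("mux"))     # -> mux(video(demux), audio(demux))
```

Because the graph is acyclic, every node can be evaluated once all of its
upstream inputs are available, no matter how the graph was combined.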
and will be implemented by plugins xref:plugins[->]. Moreover one can
preconfigure subgraphs and handle them as single entity xref:pluginstack[->].
In Lumiera everything will be translated into such a graph. Your footage will
be demultiplexed xref:demultiplexer[->] at a first node, down to the encoding
xref:encoder[->] and multiplexer xref:multiplexer[->] which assembles the
final video.
Pulling not Pushing
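The heading states the principle: evaluation is demand-driven. As a hedged
sketch (all function names are invented; the real engine is far more
elaborate), a consumer at the graph's output pulls data from its predecessors,
so nothing is computed that was not actually requested:

```python
# Sketch of the pull principle: nothing is computed until an output
# actually requests a frame; each node then pulls from its predecessor.
# All names are illustrative, not actual Lumiera interfaces.

calls = []                      # record which computations really ran

def source(frame_nr):
    calls.append(("source", frame_nr))
    return f"frame{frame_nr}"

def effect(frame_nr):
    calls.append(("effect", frame_nr))
    return source(frame_nr) + "+fx"

def viewer_request(frame_nr):
    # the consumer at the graph's output drives the whole evaluation
    return effect(frame_nr)

out = viewer_request(7)
# only the requested frame was produced; nothing was pushed eagerly
```

Contrast this with a push model, where every source would eagerly produce
frames and force them downstream regardless of demand.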
xref:frameserver[->]. Completely script driven interfaces for automated
processing are also planned.
The GUI screenshot you see above shows the fairly plain default setup you get
when you start Lumiera
for the first time (the plan is to add a 2nd Viewer to the default
configuration). Beyond that, we support a much more sophisticated screen
concept xref:screenconcept[->] to adapt to different workplaces and workflows.
Viewer
Session storage
// everything is stored in the session
[[placement]]
Placements
~~~~~~~~~~
Generic mechanism to stitch together media objects. Any placement may contain
a list of conditions how to locate the placed object, examples being
time-absolute/relative, relative to another object, or relative to some
specific source media frame.
All of the session model contents are attached by placement, forming a large
tree. Placements are to be _resolved_ to find out the actual position, output
and further locational properties of an object. Missing placement information
is _inherited_ from parent placements in the session tree. This causes a lot
of relational and locational properties to be inherited from more global
settings, unless defined locally at a given object: time reference point,
output destination, layering, fade control, audio pan,...
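The inheritance rule can be made concrete with a small sketch. The `Placement`
class, the property names and the `resolve` helper here are invented for
illustration and are not the actual Lumiera implementation:

```python
# Sketch: placements form a tree; resolving a placement fills in any
# missing locational property from its ancestors.  Invented names only.

class Placement:
    def __init__(self, parent=None, **props):
        self.parent = parent
        self.props = props

    def resolve(self, key):
        # walk up the session tree until the property is found
        node = self
        while node is not None:
            if key in node.props:
                return node.props[key]
            node = node.parent
        raise KeyError(key)

session = Placement(output="master-bus", pan=0.0)
track   = Placement(session, pan=-0.3)
clip    = Placement(track, time=25)

clip.resolve("time")     # defined locally        -> 25
clip.resolve("pan")      # inherited from track   -> -0.3
clip.resolve("output")   # inherited from session -> "master-bus"
```

The clip only states what it defines locally; everything else falls back to
more global settings, exactly as described above.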
Rendering Engine
Glossary
Track-head/patchbay::
TODO: better term for this
//Note by Ichthyo: while I like the term "patchbay", my concern with this is that
// it has already a very specific meaning in audio applications; and while our track heads
// certainly can serve as a patchbay, that is not the main purpose and they can do things
// beyond that..
the box in front of a track allowing to control properties of the
elements contained within this track, unfold nested tracks and so on.
To a large extent, it corresponds to the placement of this track and
allows to manipulate this placement
Timeline::
is comprised of:
* Time axis, defining the time base
* Play Controller (WIP: discussion whether that belongs to the timeline
and if we want a 1:N relation here). Note by Ichthyo: yes, our
current discussion showed us that a play controller rather gets
allocated to a timeline, but isn't contained therein.
* global pipes, i.e. global busses like in a mixing desk
* exactly one top level Sequence
Time Axis::
An entity defining the temporal properties of a timeline. A time axis
defines the time base, kind of timecode and absolute anchor point.
Besides, it manages a set of frame quantisation grids, corresponding
to the outputs configured for this timeline (through the global
busses). The GUI representation is a time ruler with configurable time
ticks shown on top of the timeline view
Busses::
Busses are part of a Timeline.
Sequence::
A collection of *Media Objects* (clips, effects, transitions, labels,
automation) placed onto a tree of tracks. By means of this placement,
the objects could be anchored relative to each other, relative to
external objects, absolute in time. A sequence can connect to the
global pipes when used as top-level sequence within a timeline, or
alternatively it can act as a virtual-media when used within a
meta-clip (nested sequence). In the default configuration, a Sequence
contains just a single root track and sends directly to the master bus
of the timeline.
Placement::
A Placement represents a relation: it is always linked to a Subject
(this being a Media Object) and has the meaning to place this Subject
in some manner, either relatively to other Media Objects, by some
Constraint or simply absolute at (time, output). Placements are used
to stitch together the objects in the high-level-model. Placements
thus are organised hierarchically and need to be _resolved_ to obtain
a specific value (time point, output routing, layering, fade,...)
Pipe::
Conceptual building block of the high-level model. It can be thought
of as a simple linear processing chain. A stream can be 'sent to' a
pipe, in which case it will be mixed in at the input, and you can
'plug' the output of a pipe to another destination. Further, effects
or processors can be attached to the pipe. Besides the global pipes
(busses) in each Timeline, each clip automatically creates N pipes
(one for each distinct content stream. Typically N=2, for video and
audio)
PlayController::
coordinating playback, cueing and rewinding of a playback position,
visible as 'Playhead' cursor in the GUI. When in play state, a
PlayController requests and directs a render process to deliver the
media data needed for playback.
//TODO not sure about the term and if it's appropriate to include it here
RenderTask::
basically a PlayController, but collecting output directly, without
moving a PlayheadCursor (maybe a progress indicator) and not operating
in a timed fashion, but freewheeling or in background mode
Controller Gui::
This can be either a full Software implementation for a Transport
gui-entities (Viewers, Timeline Views)
Viewer::
the display destination showing video frames and possibly some effect
overlays (masking etc.). When attached to a timeline, a viewer
reflects the state of the timeline's associated PlayController, and it
attaches to the timeline's global pipes (stream-type match or
explicitly), showing video as monitor image and sending audio to the
system audio port. Possible extensions are for a viewer to be able to
attach to probe points within the render network, to show a second
stream as (partial) overlay for comparison, or to be collapsed to a
mere control for sending video to a dedicated monitor (separate X
display or firewire)
High Level Model::
All the session content to be edited and manipulated by the user
through the GUI. The high-level-model will be translated by the
Builder into the Low Level Model for rendering.
Low Level Model::
The generated Processing Graph, to be ``performed'' within the engine
to yield rendered output
Builder::
A kind of compiler which creates Low Level/Processing Graphs, by
traversing and evaluating the relevant parts of the high-level-model
and using the Rules System.
Timeline Segment::
A range in the timeline which yields one Processing Graph, commonly
the range between cut points (which require a reconfiguration of the
graph).
// Note by Ichthyo: "Extent" sounds somewhat cool, just it didn't occur to me as a term.
// We may well agree on it, if "extent" communicates the meaning better. Up to now, I called it "segment"