Rewrap all RFCs

This reverts commit 65bae31de4103abb7d7b6fd004a8315973d3144a
and reprocesses the wrapping.

Note that the automatic wrapping is not perfect; some manual fixing
by removing some hunks was required.
Christian Thaeter 2010-07-26 02:54:14 +02:00
parent ffc4e0023c
commit 091785c2d4
39 changed files with 590 additions and 703 deletions


@ -60,11 +60,10 @@ Comments
in your example, the user would just use the "default placement". My
intention was to use '''tags''' to quite some extent. The user would be able
to tag the source footage, and then rules can kick in when a certain tag
applies. Incidentally, integrating prolog is immediately on the agenda,
because first we want to flesh out the very basic system and get to work
basic rendering. Until then, I use a "mock" implementation of the
query/rules system, which just returns some hard wired defaults.
-- link:Ichthyostega[] [[DateTime(2008-09-04T15:38:21Z)]]


@ -156,9 +156,9 @@ Comments
--------
I strongly object promoting such a thing as a general "Style Guide". It can be
a help or last resort if you are forced to work with improper tools (a
situation that's rather frequent in practice though). __As such it is well
chosen and practical__.
But basically, it shows several things:
* you are using a global namespace
@ -169,8 +169,7 @@ All of this indicates some design style breakage, so it would be preferable to
fix the design if possible.
The only part I'd like to support as a Style Guide is the rule of using the
"verb+object" pattern for
creating function names
"verb+object" pattern for creating function names
-- link:Ichthyostega[] [[DateTime(2007-07-08T11:42:39Z)]]
Probably needs little explanation:
@ -207,11 +206,9 @@ naming can be much simpler, see examples there or in my repository.
Thanks, your explanation together with the example in git made the usage
pattern much more clear. I think the _version postfix is esp. helpful on the
names of the plugin interfaces (structs in C), and probably it will be a good
practice, to have one such common plugin interface on every "plugin extension
point", i.e. every point in the system, that can be extended by plugins.
-- 217.110.94.1 [[DateTime(2007-07-10T17:23:33Z)]]
''''


@ -90,12 +90,11 @@ all, the wikipedia page mentions no disadvantages of that style :)
I just proposed K&R because it is widely accepted. Personally, I was never very
fond of K&R style, I always preferred putting opening braces to the left. I
never used GNU style until now, but it looks somewhat appealing to me. (btw,
ECLIPSE comes with presets for all these styles :-P ). Anyhow, I can adapt to
most any style. The only thing I really dislike is using tabs (with the
exception of database DDLs and CSound files, where tabs are actually helpful) :)
''''
Back to link:Lumiera/DesignProcess[]


@ -39,10 +39,7 @@ This just starts as braindump, I will refine it soon:
.Notes:
* ichthyo wrote also some ideas on
http://www.pipapo.org/pipawiki/Cinelerra/Developers/ichthyo/Cinelerra3/Architecture[Architecture] and a sketch/draft about http://www.pipapo.org/pipawiki/Cinelerra/Developers/ichthyo/Possibilities_at_hand[things possible in the middle layer]
Tasks


@ -77,8 +77,8 @@ parts of the tree (plugins, documentation, i18n, ...). We want to build up a
set of maintenance scripts in a ./admin dir.
At the moment we go for rather bleeding edge tools, because we want to stay at
a given version to avoid incompatibility problems. Later on a version switch
needs agreement/notification by all devs.
@ -87,16 +87,15 @@ Comments
--------
I am always in favor of getting the basic project organization and all
scripting up and running very early in a project. I would like if the project
would take a rather conservative approach on the required Libs and Tools, so
that finally, when we get into a beta state, we can run/compile on the major
distros without too much pain. I wouldn't completely abandon the idea to target
\*bsd and osx as well later on.
I would propose to move Doxygen to "required". The Idea to use scons sounds
quite appealing to me at the moment. Besides that, I think it could be moved to
"Draft".
-- link:Ichthyostega[] [[DateTime(2007-06-17T00:18:40Z)]]
Moved to Draft. For Developer documentation I would prefer doxygen. For user


@ -46,7 +46,7 @@ Alternatives
Rationale
~~~~~~~~~
* To cut the administrative overhead down


@ -21,12 +21,12 @@ implemented by taking advantage of this fact.
Description
~~~~~~~~~~~
There is a class of problems that this sort of behavior would help with.
First, you can organize a movie recursively. For example, you can create a
large movie file and organize it into Introduction, Chapter1, Chapter2, Climax,
and Conclusion. From there, you can edit Introduction EDL, then the Chapter1
EDL, and so forth.
From a bottom-up perspective, you can build a collection of Stock Footage (for
example, transformation scenes, lip sync frames, or maybe a running joke). You
@ -35,7 +35,7 @@ your stock footage later once you have a better idea of what you want. From
there, the edits in these other files will still be in sync in the final render
of the big project. Further, each instance of Stock Footage can be personalized
by added effects on the timeline. Finally, one can create Stock Footage without
being forced to render the file to disk first.
The usability benefits are obvious.
@ -83,7 +83,7 @@ Cons
Alternatives
~~~~~~~~~~~~
* Pre-Rendering Clips
- Unlike the current proposal, you would be unable to re-edit stock footage on
the mass scale and reapply it to the whole project.
- Moreover, rendering either introduces a generation loss or requires huge


@ -63,7 +63,7 @@ Comments
--------
* You may have noted that I implemented an Appconfig class (for some very
elementary static configuration constants).
See `common/appconfig.hpp` I choose to implement it as Meyers Singleton, so it
isn't dependent on global static initialisation, and I put the NOBUG_INIT call
there too, so it gets issued automatically.
@ -71,7 +71,7 @@ there too, so it gets issued automatically.
so count me to be much in support for this design draft. While some resources
can be pulled up on demand (and thus be a candidate for some of the many
singleton flavours), some things simply need to be set up once, and its
always better to do it explicitly and in a defined manner.
* For the proc layer, I plan to concentrate much of the setup and
(re)configuration within the loading of a session, and I intend to make the
session manager create an empty default session at a well defined point,
@ -102,10 +102,9 @@ lumiera_backend_init()
------------------------------------------------------------
Backend tests then only call `lumiera_backend_init()` and dont need to do the
whole initialization, same could be done for `lumiera_proc_init()` and
`lumiera_gui_init()`. Note about the library: i think the lib shall not depend
on such an init, but i would add one if really needed.
-- link:ct[] [[DateTime(2008-04-09T19:19:17Z)]]
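The init-per-subsystem scheme suggested in this comment could be sketched roughly like this; the guard flag and the empty body are assumptions for illustration, not the actual backend code:

```cpp
// Hypothetical sketch: each subsystem exposes its own init function,
// which is idempotent, so a backend test can call just
// lumiera_backend_init() without going through the whole application
// startup.  The guard variable and omitted setup are assumptions.
static bool backend_up = false;

void
lumiera_backend_init (void)
{
  if (backend_up) return;     // repeated calls are harmless no-ops
  /* ...bring up backend services here (omitted)... */
  backend_up = true;
}
```

`lumiera_proc_init()` and `lumiera_gui_init()` would follow the same pattern, each guarding its own subsystem.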
* After reconsidering I think we have several different problems intermixed
@ -122,7 +121,7 @@ needed.
be avoided completely to run such already in the static intialisation phase
before entering main(). My current solution (putting NOBUG_INIT it in the
Appconfig ctor) is not airtight, I think we can't avoid going for something
like a schwartz counter here.
- Then there is the initialisation of common serives. For these, it's just
fine to do a dedicated call from main (e.g. to init the backend services and
for creating the basic empty session for proc and firing off the event loop
@ -133,7 +132,7 @@ needed.
subsystem after this point. Currently, I have the policy for the proc layer
to require every destructor to be called and everything to be deallocated,
meaning that quite a lot of code is running after the end of main() -- most
of which is library generated.
-- link:Ichthyostega[] [[DateTime(2008-04-12T04:56:49Z)]]
* Regarding organisation of includes:... agreed
@ -172,7 +171,7 @@ needed.
- This system is extensible: for example I plan to let the
link:SessionManager[] issue ON_SESSION_INIT and ON_SESSION_CLOSE events.
E.g. AssetManager could now just install his callbacks to clean up the
internal Asset registry
-- link:Ichthyostega[] [[DateTime(2008-04-14T03:40:54Z)]]
* Regarding shutdown my understanding is that ON_GLOBAL_SHUTDOWN does what is


@ -49,7 +49,7 @@ Comments
here and we need to be sure that nothing gets lost.
* For the time being this formalism is enough. Later on, I fear, we will need a
bit more (and some Tool support)
-- link:Ichthyostega[] [[DateTime(2007-06-17T00:24:14Z)]]
* Accepted, deployed, done ... Final
-- link:ct[] [[DateTime(2007-06-27T16:13:25Z)]]


@ -47,20 +47,19 @@ aphphanumeric character (if it is numeric, just write it out):
These are used when the provider is a project and not an individual person.
If the provider of an interface is an individual person then he encodes his
email address in a similar way. The @ sign is encoded as uppercase "AT":
------------------------------------------------------------
7of9@star-trek.net -> sevenofnineATstartreknet
------------------------------------------------------------
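The encoding rule above (alphanumeric characters pass through, digits are written out as words, the @ sign becomes "AT", everything else is dropped) could be sketched like this; the function name is made up for illustration and is not part of the interface system:

```cpp
#include <cctype>
#include <string>

// Hypothetical sketch of the name encoding described above: letters
// pass through, digits are spelled out as words, '@' becomes "AT",
// and all other characters (like '-' or '.') are simply dropped.
std::string
encodeProviderName (std::string const& raw)
{
  static const char* digitNames[] =
    { "zero","one","two","three","four","five","six","seven","eight","nine" };
  std::string result;
  for (unsigned char ch : raw)
    {
      if (ch == '@')                result += "AT";
      else if (std::isdigit (ch))   result += digitNames[ch - '0'];
      else if (std::isalpha (ch))   result += ch;
      // punctuation is dropped
    }
  return result;
}
```

With this rule, `7of9@star-trek.net` indeed comes out as `sevenofnineATstartreknet`, matching the example above.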
Abstract identifiers
^^^^^^^^^^^^^^^^^^^^
As an alternative method one can use his gpg (or pgp) key ids or full
fingerprints. These are encoded as uppercase 'PGP' or 'GPG' followed by a
sequence of hex digits (both upper and lower case allowed):
------------------------------------------------------------
@ -86,8 +85,8 @@ entropy of 128 bits:
Following Parts: hierarchic namespace
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Lumiera itself will use some hierarchic naming scheme for its interface
declarations and implementations. The details will be laid out next;
generally things look like:
------------------------------------------------------------
lumieraorg_backend_frameprovider
@ -117,13 +116,11 @@ The above described scheme will be implemented and used by me (cehteh).
Rationale
~~~~~~~~~
I believe that writing plugins for Lumiera shall be simple. We do not want some
central registry or management. Anyone shall be able to just start to write
plugins. But that puts some responsibility on the namespace so that all plugins
can coexist and their names don't clash. The above describes a very simple and
flexible naming system which anyone can follow. It produces names which should
be sufficiently unique for practical purposes. It leaves alternatives for
providing plugins as an institution, an individual, or even anonymously.


@ -1,6 +1,6 @@
Design Process : Lumiera Design Process
=======================================
[grid="all"]
`------------`-----------------------
*State* _Final_


@ -58,18 +58,18 @@ Solution for Lumiera
We are in need of a new development model which is acceptable by all involved
people and benefits from the way Cinelerra development worked the years before,
without maintaining the bad sides again:
. *Make it easy to contribute*
Even if it is favorable when we have people who are continuously working on
Lumiera, it's a fact that people show up, send a few patches and then
disappear. The development model should be prepared for this by:
.. Good documentation
.. Well defined design and interfaces
.. Establish some coding guidelines to make it easy for others to maintain
code written by others
.. Prefer known and simple aproaches/coding over bleeding edge and highly
complex techniques
. *Simple access*
We will use a fully distributed development model using git. I'll open a
@ -84,7 +84,7 @@ quality and when there is noone who keeps them up, will be removed. Since we
are working in a distributed way with each developer maintaining his own
repository and merging from other people, there is no easy way that bad code
will leap into the project.
. *No Rule is better than a Rule which is not engaged*
We have to agree on some rules to make teamwork possible. These rules should be
kept to a minimum required and accepted by all involved people. It is vital

View file

@ -19,15 +19,15 @@ Description
The Proc-Layer differentiates into a high-level view, which models the
properties of the problem domain (manipulating media objects), and a low-level
model, which is a network of render nodes and will be optimized for processing
efficiency.
In between sits the Builder, which is triggered on all important/relevant
changes to the high-level model.
The Builder inspects the current state of this high-level model and, driven by
the actual objects and their configuration, creates a corresponding
representation within the low-level model, which is then hot-swapped into the
renderer.
In the course of this building process, all necessary decisions are taken,
disabled features and impossible connections are detected and left out, and all
@ -37,7 +37,7 @@ low-level model.
The configuration of the high-level model is deliberately very open; the
builder doesn't impose much limitations, rather he reflects the found
configuration down into the low-level model using generic rules.
Pros
^^^^
@ -70,7 +70,7 @@ the Cinelerra-2 codebase.
Conclusion
----------
This proposal reflects a distinct approach taken right from start.
Marked 'final' at October.2008 developer meeting


@ -34,10 +34,9 @@ far incomplete
The cinelerra2 sources are put into oldsrc on a per-case base.
We want to use the new GIT feature of "Superprojects and Submodules" when it is
ready for general use. Then we will transform several subtrees into separate
GIT repos which will be linked to from the main Project (then called the
"Superproject") as submodules.
@ -83,7 +82,7 @@ Comments
submodules here:
* oldsrc
* cin3
* prototype
link:Ichthyostega[]
- Draft now.
- Yes I left source dirs out but this sounds fine, note that with git, there is


@ -122,9 +122,9 @@ get this into professional production houses, then I think having a single
language from OS admin the whole way through the stack is a massive gain for
the types of users who will be using it. I personally prefer Ruby. Naturally
it's your decision to make, all the best, we are looking forward to alphas and
betas in the future
-- mytwocents
This proposal is about the ''required'' scripting language, i.e. when
accepted, Lua will be a necessary prerequisite for running Lumiera. This
doesn't rule out the ''use'' of other scripting languages. We strive at


@ -27,7 +27,7 @@ Description
* Time measurement is zero based (of course :-? )
. *Quantizing to a frame index or similar*
* Quantizing/rounding shall happen only once at a defined point in the
calculation chain and, if in doubt, be done always as late as possible.
* Values needing to be quantized to time (grid) positions are calculated by
half-way rounding, but the result should not depend on the actual
zero-point of the scale (i.e. `floor(0.5+val)`, thus quant(0.5) yields 1,
@ -36,7 +36,7 @@ Description
* Frame numbers are zero based and Frame 0 starts at time=0 (or whatever the
nominal start time is)
* Each frame starts when the locator hits its lower border (inclusively) and
ends when the locator is on its upper border (exclusively)
image:images/Lumi.FramePositions1.png[]
* When the locator snaps to frames this means it can be placed on the start
positions of the frames solely
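The rules above (half-way rounding via `floor(0.5+val)`, zero-based frames, each frame covering its interval with an inclusive lower and exclusive upper border) can be sketched as follows; the function names and the frame-duration parameter are assumptions for illustration:

```cpp
#include <cmath>

// Half-way rounding, independent of the zero point of the scale:
// quant(0.5) yields 1, quant(0.49) yields 0.
long
quant (double val)
{
  return static_cast<long> (std::floor (0.5 + val));
}

// Frame numbers are zero based; frame index covers the half-open
// interval [index*frameDuration, (index+1)*frameDuration), i.e. the
// lower border is inclusive and the upper border is exclusive.
long
frameAt (double time, double frameDuration)
{
  return static_cast<long> (std::floor (time / frameDuration));
}
```

With a 25fps grid (`frameDuration = 0.04`), a locator at exactly `0.04` already belongs to frame 1, never to frame 0, which is the exclusive-upper-border rule illustrated in the image above.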
@ -70,15 +70,13 @@ Tasks
Alternatives
~~~~~~~~~~~~
Leave everything as in Cinelerra2, i.e. show frames after the locator has
passed over them, behave different when playing backwards and set the keyframes
on the position of the locator but use them on the frame actually to be shown
(which differs according to the playback direction but is always "one off").
Why not? Because it makes frame-precise working with keyframes a real pain and
even creates contradictory situations when you switch back and forward while
tweaking.
Similar for the issues with quantized values. At first sight, e.g. directly
using the frame numbers as coordinates (as Cinelerra does) seems to be clever,
@ -110,7 +108,8 @@ sticking to "simple" values like frame counts or even a session-wide frame rate
Comments
~~~~~~~~
* This is the summary of a discussion cehteh, Plouj and ichthyo just had on
irc.
-- link:Ichthyostega[] [[DateTime(2007-06-21T05:12:03Z)]]
* We use GAVL now (needs to be included in the build system)
@ -133,7 +132,7 @@ Comments
Conclusion
~~~~~~~~~~
* The adapted form of this proposal was *accepted* by October.2008 developer
meeting.
* The proposed thin library layer to centralize time calculations shall be
added on demand. When doing so, we need to add thorough test coverage for


@ -4,8 +4,8 @@ Design Process : Application Structure
[grid="all"]
`------------`----------------------
*State* _Dropped_
*Date* _2008-11-05_
*Proposed by* link:ct[]
------------------------------------
Application Structure
@ -113,9 +113,9 @@ Alternatives
^^^^^^^^^^^^
We discussed the startup/main() through the GUI as it is currently done, it
would be also possible to produce some more executables (lumigui, luminode,
lumiserver, ....). But I think we agreed that a common loader is the best way
to go.
Rationale
@ -131,7 +131,7 @@ changes we can not forsee yet.
Comments
--------
We discussed this issue lately on IRC and I got the feeling we pretty much
agreed on it.
* we don't want to build a bunch of specialized executables, rather we build
one core app which pulls up optional parts after parsing the config
@ -148,22 +148,20 @@ an architecture based on abstractions and exploiting the proven design
patterns.
It has that flexibility, yes. But that means not that we have to abuse it in
any way. The main() there and thus the bootstrap of the application is under
our tight control, if we want to reject scriptable/highly configurable
bootstrapping there then we can just do so. That's more a social than a
technical decision. I personally don't like if a design is 'nannying' and puts
too many constraints into unforeseen areas. If the computer can do some task
better than we, it shall do it. This still means that I want to stay very much
in control; it should only do some tedious, error-prone managing tasks for me.
For example the interfaces system already tracks inter-dependencies between
plugins and interfaces automatically, without the programmer needing to care or
define anything. The interface system gets it right and we won't need to care
about the order of initialization. I added that because I consider this
absolutely important for plugins which might be supplied by third parties over
which we have no control. But I now realized that we can nicely use that for
our own internal things too. Imo that's some very valuable service.
-- link:ct[] [[DateTime(2008-11-08T06:26:18Z)]]
Some further minor details: We didn't finish the discussion about namespaces on
@ -188,11 +186,11 @@ remaining options as a vector of std::strings. Please have a look at
http://git.lumiera.org/gitweb?p=LUMIERA;a=blob;f=tests/common/mainsuite.cpp;h=45
bfd98effd0b7dbe6597f712a1bdfa35232308;hb=HEAD[the test class runner main] for
a usage example. I really want our Lumiera main to be clean and expressive in
the way showed there. Probably the most important part of the startup is
pulling up the session core; because of that I think most of the startup
process falls into the realm of the Proc-Layer. Within Proc, I don't want any
significant string manipulations done with C-strings and I don't want raw
arrays when we can use std::vector.
-- link:Ichthyostega[] [[DateTime(2008-11-06T19:28:13Z)]]
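The argument handling advocated in this comment, i.e. main handing the remaining options on as a vector of std::strings instead of raw arrays, might look roughly like this; the helper name is hypothetical:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: wrap the raw C-style argv into std::strings,
// skipping argv[0] (the program name), so the rest of the startup code
// never has to touch raw char arrays or do C-string manipulation.
std::vector<std::string>
collectCmdlineArgs (int argc, const char* const* argv)
{
  return std::vector<std::string> (argv + 1, argv + argc);
}
```

From there on, option parsing can work entirely on `std::string` values, in line with the "no raw arrays when we can use std::vector" policy stated above.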
I 'dropped' this now because we do it somewhat differently now and I dont want


@ -41,7 +41,7 @@ Tasks
Not yet decided:
* _tests_ move them into the _src/$subsystem_ as symlink?
* _src/tool_
Pros


@ -33,7 +33,7 @@ present in open source Non Linear Video editors (your mileage may vary) :
. Lack of support for certain video formats or codecs
. Lack of documentation
. Lack of cross-platform support
. Dependency on scripted languages like Python, which make installation a mess
I will expand on the problems and their proposed (or mandatory) solutions.
@ -48,7 +48,7 @@ I will expand on the problems and their proposed (or mandatory) solutions.
*Solution* Isolating the UI from the rendering and data handling (also
improves the extensibility)
*Required* Yes
*Workarounds* Auto-save (however it's not a real solution for the problem)
--------------------------------------------------------------------
Working with multimedia (video / audio) editing is a magnet for segfaults
@ -88,8 +88,8 @@ countless, imagine persistent, selective undo and so on. Any other format
(cinelerra2 XML, MXF, ...) will be realized by importer/exporter plugins.
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
2. Reinventing the wheel for every new project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -151,7 +151,7 @@ processes):
processing toolkits (DirectX, GStreamer, etc). This also makes the project
cross-platform. Tiers 1 and 2 can go in one process, and the 3 and 4 in
another (this would make tier 2 a library which defines a C++ Class, and tier
4 would also be a library which is used by the rendering engine).
By separating the tiers, these can later become their own projects and overall
the community would receive great benefits.
@ -267,11 +267,10 @@ Quote from Ohloh.net: (http://www.ohloh.net/projects/lumiera)[]
------------------------------------------------------------
Extremely well-commented source code
Lumiera is written mostly in C++. Across all C++ projects on Ohloh, 23% of all
source code lines are comments. For Lumiera, this figure is 46%. This very
impressive number of comments puts Lumiera among the best 10% of all C++
projects on Ohloh.
------------------------------------------------------------
@ -372,11 +371,11 @@ similarly runs on windows and *nix. Well. You could try to write it in Java.
See my point? While today it's quite feasible to write office stuff or banking
applications in a cross-platform manner, a video editor still is a different
kind of a beast.
A similar argumentation holds true for the question, wether or not to use
separate processes and IPC. While it certainly is a good idea to have the X
server or a database running in a separate process, the situation is really
quite different for editing video. Hopefully it's clear why.
Could you please rework this Design Entry in a way that we can finalize
(accept) it?


@ -53,7 +53,7 @@ Tasks
instructions. Core Duo supports up to SSE4 instructions. AMD announced SSE5
instructions to come in 2009.
* Consider SIMD instructions while designing the Render Nodes and Effects
architecture.
* Write the whole application in C/C++ / Lua while leaving sections to optimize
in assembly later. (Probably simple tasks or a library written in C)
* Rewrite these sections in Assembly using only instructions we agreed upon.
@ -67,7 +67,7 @@ Assuming we go all the way with an official assembly language / platform...
* Significantly faster render and previews. (Even when using a high-level
library like http://www.pixelglow.com/macstl/valarray/[macstl valarray], we
can get 3.6x -- 16.2x the speed in our inner loop. We can probably expect
greater if we hand-optimize the assembly)
Cons


@ -41,13 +41,12 @@ Tasks
Pros
~~~~
Programming Unit and Self tests in a Scripting language facilitates this task.
The X-Language bindings are quite usable today. As a side effect, it helps to
get a clean program structure, because the tests need some Interface and/or
some object factories to create the test candidates. Python is proposed,
because it is fairly mainstream, has a flat learning curve, and is moderately
modern and functional-style at the same time.
Cons
~~~~


@ -1,4 +1,4 @@
Design Process : Clip Cataloging System
=======================================
[grid="all"]
@ -8,7 +8,7 @@ Design Process : Clip Cataloging System
*Proposed by* link:JordanN[]
-------------------------------------
Clip Cataloging System
-----------------------
A system for storing, organizing, and retrieving assets, such as images and
@ -105,15 +105,15 @@ An additional benefit to using "library" managers, is that it can handle
interloans, referencing of "other" (people's/organization's) libraries,
numbering systems, descriptions, and classifications, thousands to millions of
items, search systems, review and comment systems, plus the benefits of open
source that allow the expansion of features easily. The use of task oriented
programs in this way makes use of established code that has been developed by
experts in their field. Any database system would be useful for managing all
these media. But one that has been developed by the people that have been
working with cataloging systems for a long time is likely to do well. Plus it
can be readily improved by people who do not have to know the first thing
about how to design video editing programs. The program also gets improved
because of its own community, which adds features or performance to Lumiera,
without even having to "drive" the development.
--link:Tree[][[DateTime(2008-08-27T20:38:00NZ)]].
''''

View file

@ -26,9 +26,9 @@ Additionally, a lot of great concepts for how to streamline the interface are
derived in part from link:KPhotoAlbum[].
I use tags, keywords, and metadata almost interchangeably, with the exception
that metadata includes computer generated metadata as well. These are not tags
in the conventional sense -- they don't have to be text. In fact the planned
support (please add more!) is:
* Text -- both simple strings (tags) and blocks
* Audio -- on the fly (recorded from the application) or pregenerated
@ -51,10 +51,9 @@ applied to it.
Two key functions: assign metadata and filter by metadata.
clips are one thing; but in reality most clips are much longer than their
interesting parts. Especially for raw footage, the interesting sections of a
clip can be very slim compared to the total footage. Here is a typical workflow
for selecting footage:
. Import footage.
. Remove all footage that is technically too flawed to be useful.
@ -193,7 +192,7 @@ Multiple cuts
There is no need to export a final cut from this application; it merely is the
first step in the post-production chain. It is the missing link between
receiving raw footage from the camera and adding the well executed scenes to
the timeline. What should come out of the application is a classification of
Situational, take, and instance tagging
@ -250,16 +249,14 @@ get ideas out there.
key commands
mutt/vim-style -- much faster than using a mouse, though GUI supported.
Easy to map to joystick, midi control surface, etc.
Space stop/start and tag enter
Tab (auto pause) adds metadata special
Tracks have letters within scenes -- Audio[a-z], Video[a-z], Other[a-z] (these
are not limits) -- or names.
Caps lock adds notes. This is really, really fast. It works anywhere.
This means that up to 26 different overlapping metadata sections are allowed.
Prompting
Prompting for metadata is a laborious, time-consuming process. There is no
truly efficient way to do it. This application uses a method similar to
link:KPhotoAlbum[]. When the space key is held and a letter is pressed, the tag
that corresponds to that letter is assigned to the track for the duration of
the press. (If the space is pressed and no other key is pressed at the same
@ -339,9 +336,8 @@ the video. It would be helpful to have the ability to create continuous ratings
over the entire track. Ratings would be numerical. Automatic clip
selection/suggestion could be generated by using algorithms to compute the
usefulness of video based on these ratings (as well as "boolean
operations"/"binary decisions" done with tags). The ratings could be viewed
just like levels are - color coded and overlaid on track thumbnails.
- Tree 2008-10-25

View file

@ -34,7 +34,8 @@ Tasks
* we need to work out an introspection mechanism for parameters
- asses what different types of parameters we need
- find out how structured the parameters will be (do simple values suffice?)
- define how parameters can be discovered/enumerated
- define a naming scheme for parameters, so they can be addressed
unambiguously
@ -50,12 +51,11 @@ So...
. choose a best-fitting implementation based on this information
A closely related issue is the handling of *Automation*. The current draft
calls for an abstract interface "ParamProvider", which just allows the
link:Plugin/RenderComponent[] to pull a current value, without knowing whether
the ParamProvider is a GUI widget or an automation data set with interpolation.
The component using the param value should not need to do any interpolation. We
should re-assess and refine this draft as needed. Note: Render Nodes are
stateless; this creates some tricky situations.
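The ParamProvider idea described here could be sketched as follows. This is an illustrative sketch only; the type and member names are assumptions, not the actual Lumiera API. The point is that the consumer just pulls a value, while interpolation (if any) stays hidden behind the interface:

```cpp
#include <cassert>
#include <iterator>
#include <map>

// Hypothetical sketch of "ParamProvider": the render component pulls a
// current value and never interpolates itself.
struct ParamProvider {
    virtual ~ParamProvider() = default;
    virtual double getValue(double time) const = 0;   // pull current value
};

// A GUI widget would simply report its current setting...
struct WidgetParam : ParamProvider {
    double current = 0.0;
    double getValue(double) const override { return current; }
};

// ...while an automation data set interpolates between keyframes internally.
// (Assumes at least one keyframe is present.)
struct AutomationParam : ParamProvider {
    std::map<double,double> keys;   // time -> value
    double getValue(double t) const override {
        auto hi = keys.lower_bound(t);
        if (hi == keys.begin()) return hi->second;          // before first key
        if (hi == keys.end())   return keys.rbegin()->second; // after last key
        auto lo = std::prev(hi);
        double f = (t - lo->first) / (hi->first - lo->first);
        return lo->second + f * (hi->second - lo->second);  // linear blend
    }
};
```

A consumer holding a `ParamProvider&` cannot tell the two apart, which is exactly the decoupling the draft calls for.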

View file

@ -9,7 +9,7 @@
Design the Render Nodes interface
---------------------------------
In the current design, the low-level model is comprised of "Render Nodes";
Proc-Layer and Backend carry out some collaboration based on this node network.
+
Three different interfaces can be identified
* the node wiring interface
@ -43,7 +43,7 @@ preselected to use a combination of specific working modes:
* participate in caching
* calculate in-place
* source reading
* (planned) use hardware acceleration
* (planned) remote dispatched calculation

View file

@ -9,11 +9,10 @@
Overview Engine Interface(s)
----------------------------
At the Engine Interfaces, Lumiera's Backend and Session get connected and work
together to produce rendered output. This design proposal intends to give an
overview of the connection points and facilities involved, to define some terms
and concepts, and to provide a foundation for discussion and working out the
APIs in detail.
@ -33,11 +32,10 @@ Participants
Render Process
~~~~~~~~~~~~~~
The render process brackets an ongoing calculation as a whole. It is not to be
confused with an operating system process or thread; rather it is a point of
reference for the relevant entities in the GUI and Proc-Layer that need to
connect to such a "rendering", and it holds the specific definitions for this
calculation series. A render process
_corresponds to a single data stream_ to be rendered. Thus, when the play
controller of some timeline in the model is
in _playing_ or _paused_ state, typically multiple corresponding render
@ -54,45 +52,40 @@ processes exist.
.Process parameters
A process is linked to a single stream data format (a ->
link:StreamTypeSystem.html[stream implementation type]). +
It is configured with _frame quantisation_ and _timings_, and a _model port_
identifier and _channel selector_.
quantisation:: translates time values into frame numbers. (In the most general
case this is a function, connected to the session)
timings:: a definition to translate global model time units into real clock
time, including _alignment_ to an external time grid.
model port:: a point in the (high level) model where output can be produced. +
This might be a global pipe in one of the model's timelines, or
it might be a _probe point_.
channel:: within the session and high level model, details of the stream
implementation are abstracted. Typically,
a global pipe (master bus or subgroup) corresponds to a multichannel
stream, and each of these channels might be hooked up to an
individual render process (we have to work out if that's _always the
case_ or just under _some circumstances_)
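For the simple fixed-framerate case, the quantisation mentioned above reduces to an integer division; a minimal sketch under that assumption (function and parameter names are illustrative, not Lumiera's actual API):

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch: translate a model time value (in microseconds) into a
// frame number for a fixed framerate given as a rational num/den
// (e.g. 25/1 for PAL, 30000/1001 for NTSC). In the general case described
// in the text, quantisation is instead a function connected to the session.
int64_t quantise(int64_t time_us, int rate_num, int rate_den = 1)
{
    // frame = floor(time * rate); one frame lasts 1'000'000 * den / num us
    return (time_us * rate_num) / (int64_t(1'000'000) * rate_den);
}
```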
[NOTE]
===================
While certainly the port and channel definition is fixed, unfortunately the
quantisation and the timings aren't. The timings may be changed in the middle
of an ongoing render process, due to changed playback speed, shuffling or
requirements forwarded from chase-and-lock synchronisation to an external
source. We still need to discuss if Lumiera is going to support variable
framerates (several media professionals I've talked to were rather positive we
need to support that -- personally I'm still in doubt we do). Variable
framerates force us to determine the frame numbers by an integration over time
from a start position up to the time position in question. The relevant data to
be integrated is located in the session / high-level model; probably we'll then
create an excerpt of this data, but nonetheless quantisation will be a
function of time. Anyway, it is the render process's job to translate all kinds
of parameter changes into relevant internal API calls to reconfigure the
calculation process to fit.
===================
@ -100,18 +93,15 @@ to fit.
Engine Model (low-level Model)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The low level model is a network of interconnected render nodes. It is created
by the build process to embody any configuration, setup and further
parametrisation derived from the high-level description within the session. But
the data structure of this node network is _opaque_ and considered an
implementation detail. It is not intended to be inspected and processed by
outward entities (contrast this to the high-level model within the session,
which provides an extensive discovery API and can be manipulated by model
mutating commands). We just provide a set of _query and information retrieval
functions_ to suit the needs of the calculation process. The engine model is
_not persisted._
* the engine model is partitioned by a _segmentation_ of the time axis.
Individual segments can be hot-swapped.
@ -120,15 +110,13 @@ The engine model is _not persisted._
alignment constraints.
Thus, for any pair (port, time) it is possible to figure out a segment and an
exit node to serve this position. The segmentation(s) for multiple ports might
differ. To allow for effective dispatching, the model should provide
convenience functions to translate this information into frame number ranges.
The mentioned quantisation and alignment constraints stem from the fact that
the underlying media source(s) are typically themselves quantised and the
timings might be manipulated within the processing chain. We might or might not
be able to shift the underlying media source
(it might be a live input or it might be tied to a fixed timecode)
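The (port, time) -> exit-node lookup described above can be pictured as a per-port map of time segments searched by start time. An illustrative sketch only; all type names here are assumptions, since the low-level model is explicitly opaque:

```cpp
#include <cassert>
#include <iterator>
#include <map>

// Illustrative sketch: each model port owns a segmentation of the time
// axis; looking up the segment covering a given time yields the exit node.
struct ExitNode { int id; };

struct Segmentation {
    // segment start time -> exit node serving [start, next segment's start)
    std::map<long, ExitNode> segments;

    ExitNode const* lookup(long time) const {
        auto it = segments.upper_bound(time);        // first segment after 'time'
        if (it == segments.begin()) return nullptr;  // before the first segment
        return &std::prev(it)->second;
    }
};
```

Individual segments being hot-swappable would then amount to replacing one entry of this map atomically.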
@ -136,26 +124,21 @@ to shift the underlying media source
Processing Node
~~~~~~~~~~~~~~~
In this context, a node is a conceptual entity: it is an elementary unit of
processing. It might indeed be a single invocation of a _processor_ (plugin or
similar processing function), or it might be a chain of nodes, a complete
subtree, it might _represent_ a data source (file, external input or peer in
case of distributed rendering), or it might stand for a pipeline implemented in
hardware. The actual decision about these possibilities happened during the
build process and can be configured by rules. Information about these decisions
is retained only insofar as it is required for the processing; most of the
detailed type information is discarded after the wiring and configuration step.
As mentioned above, each node serves two distinct purposes, namely to assist
with the planning and dispatching, and to pull data by performing the
calculations.
Nodes can be considered _stateless_ -- pulling a node has no effect outside the
invocation context. While a node _might_ actually be configured to drive a
whole chain or subtree and propagate the pull request
_within_ this tree or chain internally, the node _never propagates a pull
request beyond its realm._ The pull()
call expects to be provided with all prerequisite data, intermediary and output
@ -165,26 +148,23 @@ buffers.
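The stateless pull() contract described above -- all mutable state lives in the invocation context, and the caller provides every prerequisite and buffer -- might be sketched like this (hypothetical names, trivial mixing used as a stand-in for a real processing function):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the stateless pull() contract: the node is handed
// all prerequisite data and the output buffer; it holds no state itself.
using Buffer = std::vector<float>;

struct Invocation {
    std::vector<Buffer const*> inputs;  // prerequisite data, already calculated
    Buffer* output;                     // caller-provided output buffer
};

struct Node {
    // pull() only reads the invocation context -- no member state is touched,
    // so invoking the same node concurrently for different frames is safe.
    void pull(Invocation& inv) const {
        Buffer& out = *inv.output;
        for (Buffer const* in : inv.inputs)
            for (std::size_t i = 0; i < out.size(); ++i)
                out[i] += (*in)[i];     // stand-in for the real calculation
    }
};
```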
Dispatching Step
~~~~~~~~~~~~~~~~
The dispatcher translates a render process into sequences of node invocations,
which then can be analysed further (including planning the invocation of
prerequisites) and scheduled. This mapping is assisted by the engine model API
(to find the right exit node in the right segment), the render process (for
quantisation) and the involved node's invocation API (to find the
prerequisites).
Node Invocation API
~~~~~~~~~~~~~~~~~~~
As nodes are stateless, they need to be embedded into an invocation context in
order to be of any use. +
The node invocation has two distinct stages, and thus the invocation API can be
partitioned into two groups
Planning
^^^^^^^^
During the planning phase, the dispatcher retrieves the various pieces of
information necessary to _schedule_ the following pull call. This information
includes
* reproducible invocation identifier, usable to label frames for caching
* opaque source identifier (owned by the backend) when this node represents a
@ -236,8 +216,7 @@ Rationale
* allow to adjust the actual behaviour of the engine in a wide range, based on
actual measurements
* create a code structure able to support the foreseeable extensions (hardware
and distributed rendering) without killing maintainability
@ -250,7 +229,7 @@ Comments
Conclusion
~~~~~~~~~~
* *accepted* / *dropped* by MMMMMM.YYYY developer meeting.
////////////////////

View file

@ -10,7 +10,7 @@
Describe pluggable modules by a "Feature Bundle"
------------------------------------------------
This proposal builds upon Cehteh's Plugin Loader, which is the fundamental
mechanism for integrating variable parts into the application.
It targets the special situation when several layers have to cooperate in order
to provide some pluggable functionality. The most prominent example are the
@ -20,8 +20,9 @@ effect
* the engine needs a processing function
* the builder needs description data
* the gui may need a custom control plugin
* and all together need a deployment descriptor detailing how they are related.
@ -29,20 +30,18 @@ Description
~~~~~~~~~~~
The Application has a fixed number of *Extension Points*. Lumiera deliberately
by design does _not build upon a component architecture_ -- which means that
plugins cannot themselves create new extension points and mechanisms. New
extension points are created solely by the developers, by changing the code
base. Each extension point can be addressed by a fixed textual ID, e.g.
"Effect", "Transition", ....
Now, to provide a pluggable extension for such an Extension Point, we use a
*Feature Bundle*. Such a Feature Bundle consists of
* a Deployment Descriptor (provided as "structured data" -- TODO: define the
actual data format)
* the corresponding resources mentioned by this Deployment Descriptor
The Deployment Descriptor contains
* Metadata describing the Feature Bundle
@ -72,83 +71,61 @@ The Deployment Descriptor contains
- Additional Metadata depending on Type of Resource (e.g. the language of a
script)
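As an illustration only -- the actual data format is explicitly left open above -- such a Deployment Descriptor could be modelled as plain data, with the code driving an Extension Point checking for the ResourceIDs it requires:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustration only: the concrete data format is deliberately undefined
// in the proposal; all names here are assumptions.
struct ResourceEntry {
    std::string resourceID;   // e.g. "ProcFunction", queried by the Extension Point
    std::string subID;        // e.g. a platform variant
    std::string location;     // resource within the bundle
};

struct DeploymentDescriptor {
    std::string bundleName;
    std::string extensionPoint;             // e.g. "Effect"
    std::vector<ResourceEntry> resources;

    // the code operating the Extension Point checks usability like this:
    bool provides(std::string const& id) const {
        for (auto const& r : resources)
            if (r.resourceID == id) return true;
        return false;
    }
};
```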
We do _not_ provide a meta-language for defining requirements of an Extension
Point; rather, each extension point has hard-wired requirements for a Feature
Bundle targeted at this extension point. There is an API which allows code
within Lumiera to access the data found in the Feature Bundle's Deployment
Descriptor. Using this API, the code operating and utilizing the Extension
Point has to check whether a given feature bundle is usable.
It is assumed that these Feature Bundles are created / maintained by a third
party, which we call a *Packager*. This packager may use other resources from
different sources and assemble them as a Feature Bundle loadable by Lumiera. Of
course, Lumiera will come with some basic Feature Bundles (e.g. for colour
correction, sound panning, ...) which are maintained by the core dev team.
(Please don't confuse the "packager" mentioned here with the packager creating
RPMs or DEBs or tarballs for installation in a specific distro.) Additionally,
we may allow for the auto-generation of Feature Bundles for some simple cases,
if feasible (e.g. for LADSPA plugins).
The individual resources
^^^^^^^^^^^^^^^^^^^^^^^^
In most cases, the resources referred to by a Feature Bundle will be Lumiera
Plugins, which means there is an Interface (with version number) which can be
used by the code within Lumiera for accessing the functionality. Besides, we
allow for a number of further plugin architectures which can be loaded by
specialized loader code found in the core application. E.g. Lumiera will
probably provide a LADSPA host and a GStreamer host. Whether such an adapter is
applicable depends on the specific Extension Point.
The ResourceID is the identifier by which an Extension Point tries to find
required resources. For example, the Extension Point "Effect" will try to find
a ResourceID called "ProcFunction". There may be several Entries for the same
ResourceID, but with distinct SubID. This can be used to provide several
implementations for different platforms. It is up to the individual Extension
Point to impose additional semantic requirements on this SubID datafield.
(Which means: define it as we go.) Similarly, it is up to the code driving the
individual Extension Point to define when a Feature Bundle is fully usable,
partially usable or to be rejected. For example, an
"Effect" Feature Bundle may be partially usable even if we can't load any
"ProcFunction" for the current platform, but it will be unusable (rejected) if
the proc layer can't access the properties describing the media stream type
this effect is supposed to handle.
Besides binary plugins, other types of resources include:
* a set of properties (key/value pairs)
* a script, which is executed by the core code using the Extension Point and
which in turn may access certain interfaces provided by the core for "doing
things"
Probably there will be some discovery mechanism for finding (new) Feature
Bundles, similar to what we are planning for the bare plugins. It would be a
good idea to store the metadata of Feature Bundles in the same manner as we
plan to store the metadata of bare plugins in a plugin registry.
@ -176,30 +153,20 @@ Use or adapt one of the existing component systems or invent a new one.
Rationale
~~~~~~~~~
The purpose of this framework is to decouple the core application code from the
details of accessing external functionality, while providing a clean
implementation with a basic set of sanity checks. Moreover, it allows us to
create a unique internal description for each loaded module; this description
data, for example, is what gets stored as an "Asset" into the user session.
Today it is well understood what is necessary to make a real component
architecture work. This design proposal deliberately avoids creating a
component architecture and confines itself to the bare minimum needed to avoid
the common maintenance problems. As a guideline, for each flexibility available
to the user or packager, we should provide clearly specified bounds which can
be checked and enforced automatically. Because our main goal isn't to create a
new platform, framework or programming language, it is sufficient to allow the
user to _customize_ things, while structural and systematic changes can be done
by the Lumiera developers only.
@ -210,17 +177,13 @@ Comments
--------
From a fast reading, I like this; some things might get refined. For example,
I'd strongly suggest making the Deployment Descriptor itself an Interface
which is offered by a plugin; all data will then be queried by functions on
this interface, not via some 'dataformat'. Also Resource IDs and a lot of other
metadata can be boiled down to interfaces: names, versions, uuid of these,
instead of reinventing another system for storing metadata. My idea is to make
the link:Plugin/Interface[] system self-describing; this will also be used to
bootstrap a session on itself (by the serializer, which is tightly integrated).
-- link:ct[] [[DateTime(2008-09-04T09:28:37Z)]] 2008-09-04 09:28:37
Back to link:Lumiera/DesignProcess[]

View file

@ -9,13 +9,12 @@ Design Process: Lumiera Forward Iterator
-------------------------------------
The situation focussed on by this concept is when an API needs to expose a
sequence of results, values or objects, instead of just yielding a function
result value. As the naive solution of passing a pointer or array creates
coupling to internals, it was superseded by the GoF
http://en.wikipedia.org/wiki/Iterator[Iterator pattern]. Iteration can be
implemented by convention, polymorphically or by generic programming; we use
the latter approach.
Lumiera Forward Iterator concept
@ -25,7 +24,7 @@ An Iterator is a self-contained token value, representing the promise to pull a
sequence of data
- rather then deriving from an specific interface, anything behaving
appropriately _is a Lumiera Forward Iterator._
- the client finds a typedef at a suitable, nearby location. Objects of this
type can be created, copied and compared.
- any Lumiera forward iterator can be in _exhausted_ (invalid) state, which
@ -42,11 +41,10 @@ sequence of data
Discussion
~~~~~~~~~~
The Lumiera Forward Iterator concept is a blend of the STL iterators and
iterator concepts found in Java, C#, Python and Ruby. The chosen syntax should
look familiar to C++ programmers and indeed is compatible with STL containers
and ranges. In contrast, while an STL iterator can be thought of as being just
a disguised pointer, the semantics of Lumiera Forward Iterators is deliberately
reduced to a single, one-way-off forward iteration, they can't be reset,
manipulated by any arithmetic, and the result of assigning to a dereferenced
iterator is unspecified, as is the meaning of post-increment and stored copies
@ -54,15 +52,14 @@ in general. You _should not think of an iterator as denoting a position_ --
just a one-way off promise to yield data.
Another notable difference to the STL iterators is the default ctor and the
+bool+ conversion. The latter allows using iterators painlessly within +for+
and +while+ loops; a default-constructed iterator is equivalent to the STL
container's +end()+ value -- indeed any _container-like_ object exposing
Lumiera Forward Iteration is encouraged to provide such an +end()+ function,
additionally enabling iteration by +std::for_each+ (or Lumiera's even more
convenient
+util::for_each()+).
Implementation notes
^^^^^^^^^^^^^^^^^^^^
*iter-adapter.hpp* provides some helper templates for building Lumiera Forward
@ -71,11 +68,9 @@ Implementation notes
- _IterAdapter_ is the most flexible variant, intended for use by custom
facilities.
An IterAdapter maintains an internal back-link to a facility exposing an
iteration control API, which is accessed through free functions as an extension
point. This iteration control API is similar to C#, allowing the client to
advance to the next result and to check the current iteration state.
- _RangeIter_ wraps two existing iterators -- usually obtained from +begin()+
and +end()+ of an STL container
embedded within the implementation. This allows for iterator chaining.
@ -84,25 +79,23 @@ Implementation notes
to be dereferenced automatically on access
Similar to the STL habits, Lumiera Forward Iterators should expose typedefs for
+pointer+, +reference+ and +value_type+. Additionally, they may be used for
resource management purposes by ``hiding'' a ref-counting facility, e.g.
allowing to keep a snapshot or result set around until it can't be accessed
anymore.
Tasks
^^^^^
The concept was implemented both for unit tests and to be used on the
_QueryResolver_ facility; thus it can be expected to show up on the session
interface, as the _PlacementIndex_ implements _QueryResolver_. QueryFocus also
relies on that interface for discovering session contents. Besides that, we
need more implementation experience.
Some existing iterators or collection-style interfaces should be retro-fitted.
See http://issues.lumiera.org/ticket/349[Ticket #349]. +
Moreover, we need to gain experience about mapping this concept down into a
flat C-style API.
@ -118,23 +111,21 @@ Alternatives
Rationale
~~~~~~~~~
APIs should be written so as not to tie them to the current implementation.
Exposing iterators is known to create a strong incentive in this direction and
thus furthers the creation of clean APIs.
Especially in Proc-Layer we already utilise several iterator implementations,
but without a uniform concept, these remain just slightly disguised
implementation types of a specific container. Moreover, the STL defines various
and very elaborate iterator concepts. Ichthyo considers most of these an
overkill and an outdated approach. Many modern programming languages build with
success on a very simple iterator concept, which allows one to pull a sequence
of values -- and nothing more.
Thus the idea is to formulate a concept in compliance with STL's forward
iterator -- but augmented by a stop-iteration test. This would give us basic
STL integration and look familiar to C++ and Java programmers without
compromising the clean APIs.
View file
@ -121,9 +121,9 @@ quite different working approaches, which obviously can have quite some impact
on the resulting style and rhythm of the final movie. The distinguishing
property of the working style to be supported by the "marble mode" is that it
bypasses the state of creating and organizing clips, but rather directly
evolves the footage into the final cut. This working style is dual to the
common clip based approach; neither of them is superior or inferior, thus we
should actively support both working styles.
View file
@ -23,8 +23,8 @@ data processing (=rendering), the high-level model is what the user works upon
when performing edit operations through the GUI (or script driven in
"headless"). Its building blocks and combination rules determine largely what
structures can be created within the
http://www.lumiera.org/wiki/renderengine.html#Session[Session]. On the whole,
it is a collection of
http://www.lumiera.org/wiki/renderengine.html#MObjects[media objects] stuck
together and arranged by
http://www.lumiera.org/wiki/renderengine.html#Placement[placements].
@ -49,12 +49,12 @@ to a spatial audio system). Other processing entities like effects and
transitions can be placed (attached) at the pipe, resulting them to be appended
to form this chain. Optionally, there may be a *wiring plug*, requesting the
exit point to be connected to another pipe. When omitted, the wiring will be
figured out automatically. Thus, when making a connection _to_ a pipe, output
data will be sent to the *source port* (input side) of the pipe, whereas when
making a connection _from_ a pipe, data from its exit point will be routed to
the destination. Incidentally, the low-level model and the render engine employ
_pull-based processing,_ but this is rather of no relevance for the high-level
model.
image:images/high-level1.png[]
@ -124,12 +124,12 @@ image:images/high-level3.png[]
The Session contains several independent
http://www.lumiera.org/wiki/renderengine.html#EDL[EDL]s plus an output bus
section ( *global Pipes* ). Each EDL holds a collection of MObjects placed
within a *tree of tracks*. Within Lumiera, tracks are a rather passive means
for organizing media objects, but aren't involved in the data processing
themselves. The possibility of nesting tracks allows for easy grouping. Like
the other objects, tracks are connected together by placements: a track holds
the list of placements of its child tracks. Each EDL holds a single placement
pointing to the root track.
As placements have the ability to cooperate and derive any missing placement
specifications, this creates a hierarchical structure throughout the session,
@ -229,7 +229,7 @@ consequences right from start. Besides, the observation is that the development
of non-mainstream media types like stereoscopic (3D) film and really convincing
spatial audio (beyond the ubiquitous "panned mono" sound) is hindered not by
technological limitations, but by pragmatism preferring the "simple" hard wired
approach.
View file
@ -3,8 +3,8 @@
*State* _Idea_
*Date* _2008-03-06_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Placement Metaphor used within the high-level view of Proc-Layer
----------------------------------------------------------------
@ -13,33 +13,33 @@ Proc-Layer (as being currently implemented by Ichthyo) is to utilize
''Placement'' as a single central metaphor for object association, location and
configuration within the high-level model. The intention is to prefer ''rules''
over fixed ''values.'' Instead of "having" a property for this and that, we
query for information when it is needed.
The proposed use of '''Placement''' within the proc layer spans several,
closely related ideas:
* use the placement as a universal means to stick the "media objects" together
and put them on some location in the timeline, with the consequence of a
unified and simplified processing.
* recognize that various ''location-like'' degrees of freedom actually form a
single ''"configuration space"'' with multiple (more than 3) dimensions.
* distinguish between ''properties'' of an object and qualities, which are
caused by "placing" or "locating" the object in ''configuration space''
- ''properties'' belong to the object, like the blur value, the media source
file, the sampling/frame rate of a source
- ''location qualities'' exist only because the object is "at" a given
location in the graph or space, most notably the start time, the output
connection, the layering order, the stereoscopic window depth, the sound
pan position, the MIDI instrument
* introduce a ''way of placement'' independent of properties and location
qualities, describing if the placement ''itself'' is ''absolute, relative or
even derived''
* open especially the possibility to ''derive'' parts of the placement from
the context by searching over connected objects and then up the track tree;
this includes the possibility of having rules for resolving unspecified
qualities.
Description
~~~~~~~~~~~
@ -51,8 +51,8 @@ defining a property can be seen as ''placing'' the object to a specific
parameter value on one of these dimensions. While this view may be bewildering
at first sight, the important observation is that in many cases we don't want
to lock down any of those parameters completely to one fixed value. Rather, we
just want to ''limit'' some parameters.
To give an example, most editing applications let the user place a video clip
at a fixed time and track. They do so by just assigning fixed values, where the
track number determines the output and the layering order. While this may seem
@ -62,97 +62,97 @@ than not it's not necessary to "nail down" a video clip -- rather, the user
wants it to start immediately after the end of another clip, it should be sent
to some generic output and it should stay in the layering order above some
other clip. But, as the editing system fails to provide the means for
expressing such relationships, we are forced to work with hard values, resort
to a bunch of macro features or even compensate for this lack by investing
additional resources in production organisation (the latter is especially true
for building up a movie sound track).
On the contrary, using the '''Placement''' metaphor has the implication of
switching to a query-driven approach.
* it gives us one single instrument to express the various kinds of relations
* the ''kind of placement'' becomes an internal value of the ''placement'' (as
opposed to the object)
* some kinds of placement can express rule-like relations in a natural fashion
* while there remains only one single mechanism for treating a bunch of
features in a unified manner
* plugins could provide exotic and advanced kinds of placement, without the
need of massively reworking the core.
When interpreting the high-level model and creating the low-level model,
Placements need to be ''resolved'', resulting in a simplified and completely
nailed-down copy of the session contents, which this design calls »the
'''Fixture'''«
Media Objects can be placed
* fixed at a given time
* relative to some reference point given by another object (clip, label,
timeline origin)
* as plugged into a specific output pipe (destination port)
* as attached directly to another media object
* to a fixed layer number
* layered above or below another reference object
* fixed to a given pan position in virtual sound space
* panned relative to the pan position of another object
Tasks
^^^^^
* currently just the simple standard case is drafted in code.
* the mechanism for treating placements within the builder is drafted in code,
but needs to be worked out to see the implications more clearly
* while this design opens endless possibilities, it is not clear how much of
it should be visible through the GUI
Pros
^^^^
* with just one concept, we get a lot of issues right, which many conventional
approaches fail to solve satisfactorily
* one grippy metaphor instead of several special treatments
* includes the simple standard case
* unified treatment
* modular and extensible
* allows much more elaborate handling of media objects than the conventional
approach, while both the simple standard case and the elaborate special case
are "first class citizens" and completely integrated in all object
treatment.
Cons
^^^^
* difficult to grasp, breaks with some habits
* requires a separate resolution step
* requires to ''query'' for object properties instead of just looking up a
fixed value
* forces the GUI to invent means for handling object placement which may go
beyond the conventional
* can create quite some surprises for the user, especially if he doesn't care
to understand the concept up front
Alternatives
^^^^^^^^^^^^
Use the conventional approach
* media objects are assigned with fixed time positions
* they are stored directly within a grid (or tree) of tracks
* layering and pan are hard wired additional properties
* implement an additional auto-link macro facility to attach sound to video
* implement a magnetic snap-to for attaching clips seamless after each other
* implement a splicing/sliding/shuffling mode in the gui
* provide a output wiring tool in the GUI
* provide macro features for this and that....
. (hopefully I made clear by now ''why'' I don't want to take the conventional
approach)
Rationale
~~~~~~~~~
@ -161,23 +161,23 @@ the use of analogue hardware, especially multitrack tape machines. This
conventional approach constantly creates practical problems, which could be
avoided by using the placement concept. This is due to the fact, that the
placement concept follows the natural relations of the involved concepts, while
the conventional approach was dictated by technological limitations.
* the usual layering based on tracks constantly forces the user to place
clips in an unnatural and unrelated fashion and tear apart clips which belong
closely together
* the conventional approach of having a fixed "pan control" in specialized
"audio tracks" constantly hinders the development of more natural and
convincing sound mixing. It favors a single sound system (intensity based
stereophony) for no good reason.
* handling of stereoscopic (3D) video/film is notoriously difficult within the
conventional, hard wired approach
* building more elaborate sound scapes and sound design is notoriously
difficult to maintain, because the user is forced to use hidden "side
chains", magic rules and re-build details in external applications, because
of the lack of flexible integration of control data alongside with the main
data.
The high-level model is close to the problem domain, it should provide means to
express the (naturally complex) relationships between media objects. Using an
abstract and unified concept is always better than having a bunch of seemingly
@ -187,42 +187,40 @@ which well fits into the problem domain. Finally, there is sort-of a visionary
aspect involved here: Ichthyo thinks that nowadays, after image and sound are
no longer bound to physical media, there is potential for new workflows to be
discovered, and the Placement concept could be an extension point for such
undertakings.
Comments
--------
Placement Metaphor
~~~~~~~~~~~~~~~~~~
Re:
"Finally, there is sort-of a visionary aspect involved here:
Ichthyo thinks that nowadays, after image and sound are no longer bound to
physical media, there is potential for '''new workflows''' to be
'''discovered''', and the '''Placement concept''' '''''could be''''' an
'''extension point''' for such undertakings."
New workflows will not just be '''discovered''', but they will be able to be
'''recorded, analysed, templated, automated, and integrated''' into the full
workflow process. This will free up a greater proportion of time for the
"finishing" processes of projects.
"The Placement concept 'could be' an extension for such undertakings" is very
likely to be an understatement as it is this which '''''will be''''' what
makes these undertakings possible, because it enables the gathering, use, and
decision rules based on these parameters.
This feature/capability is likely to stamp the Lumiera project as a flagship
benchmark in more ways than one, for some time.
. --link:Tree[][[DateTime(2008-08-23T12:54:00NZ)]].
Back to link:Lumiera/DesignProcess[]

View file

@ -7,8 +7,8 @@
Render Optimizer
----------------
Render only parts of a frame which are necessary for the Output; Optimize
render pipeline for efficiency
Description
@ -68,8 +68,8 @@ Possible classification for video filters:
Filters of type 1 and type 2 never use any previous frames, and are strictly
one frame in - one frame out. Filters of type 1 can always be swapped with
filters of type 2, the output is the same. All other filters cannot be swapped
in general.
The good news is that:
View file
@ -14,8 +14,8 @@ Resource Management: Budgeting
******************************************************************************
The Profiler will give some idea about how many resources can be used to
optimally utilize the system. Knowing this number leads to the next challenge,
distributing the resources to different subsystems, jobs and objects. I here
introduce a budgeting system which takes care of this.
******************************************************************************
View file
@ -13,9 +13,9 @@ Resource Management: Profiling
[abstract]
******************************************************************************
From the beginning on we planned some kind of 'profiling' to adapt dynamically
to workload and machine capabilities. I describe here how statistical data can
be gathered in a generic way. This will later work together with other
components tuning the system automatically.
******************************************************************************
@ -27,18 +27,17 @@ I just introduce some ideas about the planned profiling framework here, nothing
is defined/matured yet; this is certainly subject to further discussion and
refinement.
.Requirements/Evaluation
generic::
Profiling should be sufficiently abstracted to have a single set of
datastructures and algorithms to work on a broad range of subjects
being profiled. Moreover the profiling core just offers unitless
counters; semantics will be added on top of that on a higher level.
least possible overhead::
Profiling itself must not cost much, it must not block and should avoid
expensive operations. Simple integer arithmetic without divisions is
suggested.
accurate::
We may sample data on in stochastic way to reduce the overhead,
nevertheless data which gets sampled must be accurately stored and
@ -46,22 +45,20 @@ refinement.
transient values::
It's quite common that some values can be far off either in maximum or
in minimum direction; the system should adapt to this and recover from
such false alarms. Workload also changes over time, so we need to find some
way to measure the current/recent workload; a grand total over the
whole application runtime is rather uninteresting. It is also
important that we adapt slowly enough not to get into some oscillating
cycle.
active or passive system::
Profiling can be either passive, collecting data and letting it be analyzed by
some other component, or active, triggering some action when some limits
are reached. I am yet a bit undecided and keep it open for both.
@ -77,8 +74,8 @@ struct profile
ProfileVTable vtable;
/*
Using trylock for sampling makes it never contend on the lock but some
samples are lost. Should be ok.
*/
mutex_t lock; /* with trylock? */
View file
@ -11,12 +11,10 @@ Roadmap up to Lumiera 1.0
-------------------------
As the very basic architecture questions seem to settle down now, it seems to
be time to create a first Roadmap skeleton for the project. A specific approach
is proposed: we should define criteria allowing us to judge when we've reached
a certain level plus we should define features to be ''excluded'' at a certain
level. We should
''not'' define ''Features'' to go into a certain level.
''the following text is copied from the Lumiera
@ -30,16 +28,14 @@ Description: Milestones up to first Release
Milestone integration: cooperating parts to render output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, the basic subsystems of Lumiera need to be
designed, the most important interfaces between the parts of the application
exist in a first usable version, and all the facilities on the rendering code
path are provided at least in a dummy version and are '''capable of cooperating
to create output'''. Based on Lumiera's design, this also means that the basic
frame cache in the backend is working. And it means that a media asset and a
clip can be added to the internal session representation, which is then handed
over to the builder. Probably it's a good idea to include basic
playback/display of the rendered frames within the GUI while they are created.
Notable features ''not'' included
@ -54,17 +50,14 @@ Notable features ''not'' included
Milestone alpha: operations accessible for users
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, the fundamental operations you'd expect from
a video editing software can be '''accessed by a user''' (not a developer!).
This means that the basic distribution/release model is set up, a ''user'' is
able to compile Lumiera or install an existing package. Moreover a user should
be able to create/open a session file (without any quirks), add some media
(probably only a limited number of media types will be supported), and then
perform the most basic operations like positioning, trimming, copying, playing
and finally rendering. Then the integration phase is closed and Lumiera has
reached alpha level.
Notable features ''not'' included
@ -83,24 +76,18 @@ Notable features ''not'' included
Milestone beta: usable for real work
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, users should be able to '''get real work done
with Lumiera'''. Especially, a basic asset management should be in place,
Lumiera should be able to handle the most common media types, the user should
be able to do common editing tasks (adding, trimming, rolling, splicing,
copying, moving) both by direct manipulation within the timeline, as by using
the conventional two-viewer setup with in/out points. Moreover, it should be
possible to attach effects (probably still just some limited kinds of effects),
apply simple transitions and control the layering and overlay mode on output.
Similarly, the elementary routing capabilities and the handling of multiple
sequences should be supported (probably still with limitations). The framework
for automation handling should be in place, while there may still be
limitations on automation/keyframe editing. Having about this feature set
indicates that Lumiera has entered the beta phase.
Notable features ''not'' included
@ -121,15 +108,12 @@ Notable features ''not'' included
Milestone release-1.0: usable for productions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, Lumiera should be a '''reliable tool for
productions with a deadline'''. Lumiera 1.0 is not the ''dream machine,'' but
users should be able to do simple productions. We should be able to promote
Lumiera to professionals without remorse. The GUI should be mature,
undo/recovery should work airtight, performance should be ok-ish and output
quality without any glitches. Plugin authors can rely on stable interfaces and
backwards compatibility from now on, up to release 2.0.
Notable features ''not'' included
@ -177,19 +161,16 @@ Rationale
~~~~~~~~~
We deliberately don't set any date schedule. Releases happen ''when they are
ready.'' We may decide to do sprints on a short-term timeframe, but it doesn't
help promising things we can't calculate for sure. In a commercial setup, you
have to commit to features and dates, but you also control a certain budget,
which gives you the means to ''make things happen.'' In Open Source
development, we have to be patient and wait for things to happen ;-)
Thus the proposal is to set up just a very coarse and almost self-evident
roadmap skeleton, but to discuss and define criteria up-front, which allow us
to determine when we've actually reached a given level. Moreover, the proposal
is to add a list of features which can be safely ''excluded'' from the given
milestone
@ -210,7 +191,7 @@ things are deliberately left out.
In ticket #4 (debian packaging) I explained that packaging might be optional
for 'alpha' and should be moved to 'beta'.
-- link:ct[] 2009-02-01
OK, we should make the packaging optional. I think, for alpha the criterion is
"accessibility for users". If compiling remains as easy as it is now (compared
with other media related projects), then this shouldn't be a barrier.
View file
@ -57,7 +57,7 @@ Levels of classification
^^^^^^^^^^^^^^^^^^^^^^^^
The description/classification of streams is structured into several levels. A
complete stream type (implemented by a stream type descriptor) contains a tag
or selection regarding each of these levels.
* Each media belongs to a fundamental *kind of media*, examples being _Video,
Image, Audio, MIDI, Text,..._ This is a simple Enum.
View file
@ -21,85 +21,87 @@ Description
-----------
//description: add a detailed description:
By default in POSIX, signals are sent to whatever thread is running and handled
there. This is quite unfortunate because a thread might be in some time
constrained situation, hold some locks or have some special priority. The
common way to handle this is blocking (most) signals in all threads except
having one dedicated signal handling thread. Moreover it makes sense that the
initial thread does this signal handling.
For Lumiera I propose to follow this practice and extend it a little by
dedicating the initial thread to some management tasks. These are:
* signal handling, see below.
* resource management (resource-collector), waiting on a condition variable or
message queue to execute actions.
* watchdog for threads, not being part of the application schedulers but
waking up periodically (infrequently, every so many seconds) and checking if
any thread got stuck (threads.h defines a deadline api which threads may
use). We may add some flag to threads defining what to do with a given
thread when it gets stuck (emergency shutdown or just cancel the thread).
Generally threads should not get stuck but we have to be prepared against
rogue plugins and programming errors.
.Signals which need to be handled
These are mostly proposals about how the application shall react to signals and
comments about possible signals.
SIGTERM::
Sent on computer shutdown to all running apps. When running with a GUI
we have likely lost the Xserver connection before this, so it needs to
be handled from the GUI. Nevertheless in any case (most importantly when
running headless) we should do a fast application shutdown; no
data/work should get lost, and a checkpoint in the log is created. One
caveat might be that Lumiera has to sync a lot of data to disk. This
means that the usual timeouts from SIGTERM to SIGKILL as in normal
shutdown might not be sufficient; there is nothing we can do there. The
user has to configure his system to extend these timeouts (alternative:
see SIGUSR below).
SIGINT::
This is the CTRL-C case from the terminal; in most cases this means that
a user wants to break the application immediately. We trigger an
emergency shutdown. Recent actions are already logged, so no work
gets lost, but no checkpoint in the log gets created, so one has to
explicitly recover the interrupted state.
SIGBUS::
Will be raised by I/O errors in mapped memory. This is a kind of
exceptional signal which might be handled in individual threads. When
the cause of the error is traceable then the job/thread working on this
data goes into an erroneous mode, else we can only do an emergency
shutdown.
SIGFPE::
Floating point exception, division by zero or something similar. Might
be allowed to be handled by each thread. In the global handler we may
just ignore it or do an emergency shutdown. tbd.
SIGHUP::
For daemons this signal is usually used to re-read configuration data.
We shall do so too when running headless. When running with a GUI this
might act like either SIGTERM or SIGINT; possibly this can be
configurable.
SIGSEGV::
Should not be handled; at the time a SEGV appears we are in an undefined
state and anything we do may make things worse.
SIGUSR1::
First user defined signal. Sync all data to disk, generate a
checkpoint. The application may block until this is completed. This can
be used in preparation for a shutdown or periodically to create some
safe-points.
SIGUSR2::
Second user defined signal. Produce diagnostics, to terminal and file.
SIGXCPU::
CPU time limit exceeded. Emergency Shutdown.
SIGXFSZ::
File size limit exceeded. Emergency Shutdown.
Tasks
@ -107,7 +109,7 @@ Tasks
// List what would need to be done to implement this Proposal in a few words:
// * item ...
We have appstate::maybeWait() which already does such a loop. It needs to be
extended by the proposed things above.
@ -134,7 +136,7 @@ Rationale
---------
//rationale: Describe why it should be done *this* way:
This is rather common practice. I describe it here for documentation purposes
and to point out which details are not yet covered.
//Conclusion
View file
@ -22,7 +22,7 @@ Definitions
Project:: the top-level context in which all edit work is done over an
extended period of time. The Project can be saved and re-opened. It is
comprised of the collection of all things the user is working on; it contains
all information, assets, state and objects to be edited.
Session:: the current in-memory representation of the Project when opened
within an instance of Lumiera. This is an implementation-internal term. For
View file
@ -57,7 +57,8 @@ You build up a simple linear cut sequence. Either by
of) unwanted parts
- playing source media and spilling over (insert, overwrite) some parts into
the final assembly
- dragging over pre-organised clips from clip folders to build up the
assembly.
Sound is either used immediately as-is (the soundtrack attached to the media),
or there is a similarly simple, linear music bed. Some people prefer to switch
@ -72,9 +73,9 @@ Without the intention to rework it from scratch, an already existing simple
assembly is augmented, beautified and polished, maybe to conform with
professional standards. This includes the "rescue" of a somewhat questionable
assembly by repairing localized technical problems, but also shortening and
re-arranging, and in extreme cases even changing the narrative structure. A
distinctive property of this usage scenario is that work happens rather in the
context of ''tasks'' (passes) -- not so much isolated operations:
- the task may be to get the rhythm or overall tempo right, and thus you go
over the sequence and do trim, roll, shuffle or slide edits.
@ -201,11 +202,11 @@ Scenario (8) : Script driven
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The application is started ''headless'' (without GUI) and controlled via an
API. Either an existing session is loaded, or a new session is created and
populated. Then, some operations have to be done in a systematic manner,
requiring a way to address parts of the session both unambiguously and in a way
easy to access and control from a programming environment (you can't just
''see'' the right clip, it needs to be tagged). Finally, there might be an
export or render step. A variation of this scenario is the automatic extraction
of some information from an existing project.
@ -285,11 +286,9 @@ Template e.g. for regular TV series
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Constraints to fit all contents within fixed timeline, cover topic, select
collage of iconic scenes from archived and collected footage. Update intro and
credit roll for each episode. Add in stopmotion, and 3D model animations with
vocal commentaries. Gather together separate items from "outworkers". Tree
(@)SIG(@)
Back to link:Lumiera/DesignProcess[]