Import old DesignProcess into rfc_pending

This commit is contained in:
Christian Thaeter 2010-07-23 18:24:30 +02:00
parent 64ad648eac
commit d1d3461e5d
50 changed files with 4475 additions and 0 deletions

Six binary image files added (contents not shown): 2.4 KiB, 12 KiB, 25 KiB, 50 KiB, 49 KiB and 90 KiB.

@@ -0,0 +1,151 @@
Design Process : All Plugin Interfaces Are C
============================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-29_
*Proposed by* link:ct[]
-------------------------------------
C interfaces
------------
When we offer interfaces for plugging in external code, we export them as C interfaces.
Description
~~~~~~~~~~~
Lumiera will be based on a plugin architecture; the core is just a skeleton with very few components. Everything else is loaded at runtime as a _plugin_. C++ interfaces are hard to use from other programming languages. Thus I propose to export every interface between these plugins as a C interface, which can be integrated much more easily into other languages.
.Further notes:
* dynamic loading of plugins, maybe unloading
* proper interface versioning
Implementation Proposal
^^^^^^^^^^^^^^^^^^^^^^^
* keep the interface in a C struct (POD).
* the interface is versioned
* first member is a _size_ which will be initialized by the actual implementation
* followed by function pointers defining the interface, see: link:Lumiera/DesignProcess/CCodingStyleGuide[]
* everything added is considered immutable for this interface version
* new functions are added to the end (thus changing size)
* function pointers must be implemented and must never be NULL
* a given interface version can be extended (but nothing removed)
* code using an interface just needs to check once that the size it gets from a supplier is >= what it expects; then the interface satisfies its requirements
* functions are versioned too; old versions might be superseded by newer ones, but the interface still needs to provide backward-compatibility functions
* when old functions are finally deprecated, the interface version is bumped and the old functions are removed from the struct
Example: defining the interface structure:
[source,C]
----
struct lumiera_plugin_audio_interface_1
{
size_t size;
// Now the prototypes for the interface
AudioSample (*sample_normalize_limit_1)(AudioSample self, int limit);
unsigned (*sample_rate_1) (AudioSample self);
AudioSample (*sample_set_rate_1) (AudioSample self, unsigned rate);
// a later version might take a double as limit
AudioSample (*sample_normalize_limit_2)(AudioSample self, double limit);
};
----
Example: how a plugin 'thiseffect' initializes the struct:
[source,C]
----
struct lumiera_plugin_audio_interface_1 lumiera_plugin_audio_thiseffect_interface =
{
// maybe we want to initialize size somewhat smarter
sizeof(struct lumiera_plugin_audio_interface_1),
// these are the actual functions implemented in 'thiseffect'
lumiera_plugin_audio_thiseffect_sample_normalize_limit_1,
lumiera_plugin_audio_thiseffect_sample_rate_1,
lumiera_plugin_audio_thiseffect_sample_set_rate_1,
lumiera_plugin_audio_thiseffect_sample_normalize_limit_2
};
----
Example: how it will be used (schematic code):
[source,C]
----
int main()
{
// get the function vector
void* thiseffecthandle = dlopen("thiseffect.so", RTLD_NOW);
struct lumiera_plugin_audio_interface_1* thiseffect =
dlsym(thiseffecthandle, "lumiera_plugin_audio_thiseffect_interface");
// call a function
thiseffect->sample_normalize_limit_1 (somesample, 1);
}
----
Further notes:
The above gives only an idea; the data structure needs to be extended with some more information (reference counter?). The first few functions should be common to all interfaces (init, destroy, ...). Opening and initialization will be handled at a higher level than in the example above, with some macros to make versioning simpler, and so on.
Tasks
^^^^^
* Write some support macros when we know how to do it
Pros
^^^^
* Easier usage/extension from other languages.
Cons
^^^^
* Adds some constraints on possible interfaces, the glue code has to be maintained.
Alternatives
^^^^^^^^^^^^
* Just use C++ only
* Maybe SWIG?
* Implement lumiera in C instead of C++
Rationale
~~~~~~~~~
Not sure yet, maybe someone has a better idea.
Comments
--------
After a talk on IRC, ichthyo and I agreed on making lumiera a multi-language project where each part can be written in the language which fits it best. Language purists might disagree with such a mix, but I believe the benefits outweigh the drawbacks.
-- link:ct[] [[DateTime(2007-07-03T05:51:06Z)]]
C is the only viable choice here. Perhaps some sort of "test bench" could be designed to rigorously test plugins for any problems which may cause Lumiera to become unstable (memory leaks etc).
-- link:Deltafire[] [[DateTime(2007-07-03T12:17:09Z)]]
After a talk on IRC we decided to do it this way; further work will be documented in the repository (tiddlywiki/source).
-- link:ct[] [[DateTime(2007-07-11T13:10:07Z)]]
''''
Back to link:Lumiera/DesignProcess[]

@@ -0,0 +1,136 @@
Design Process : Application Structure
======================================
[grid="all"]
`------------`----------------------
*State* _Dropped_
*Date* _2008-11-05_
*Proposed by* link:ct[]
------------------------------------
Application Structure
---------------------
Here I am going to propose some more refined structure of the application and its components.
Description
~~~~~~~~~~~
So far we came up with a simplified BACKEND/PROC/GUI structure where each of these entities defines its own subcomponents. We agreed to glue it all together with some portable, versioned interface system, but details were not laid out yet. At the time of this writing the interface system and plugin loader are reasonably finished and usable (some small refinements remain). We recently discussed some details on IRC on how to engage this, without reaching a definitive decision. The topic of this proposal is to give a detailed description of how the application components are glued together.
In the discussion mentioned above we concluded that we want a 'lumiera' binary which in turn loads the optional parts as plugins. There was no consensus on what these parts would actually be, except that the GUI should be optional for headless operation. I suggested making as much as possible pluggable, to make it easier to validate our interfaces and try different things out.
Now I introduce 'lumiera' here; this will become a new component in ./src/lumiera, being the driver application for bootstrapping all the rest.
Then our application structure looks somewhat like this (please refine):
* the 'lumiera' loader
- commandline handling
- interface & plugin system
- session manager core
- configuration system
- lua scripting
* backend
- file and io handling
- caches
- streams
- threads
- scheduler
* proc
- asset management
- config rules system
- builder
- render graph management
* gui
- timelines
- viewers
- resources
- preferences
- ...
Furthermore, the interface & plugin system is flexible enough to provide things independently of their origin (whether built in or a plugin/dynamic library). So deployment (where to link these things) is secondary.
'lumiera' will then be the executable the user starts; what exactly gets initialized and booted up is then a matter
of configuration and commandline options (and maybe lua scripting?).
Tasks
^^^^^
* create the 'lumiera' directory
- setup the build system
- move config, plugin and interfaces therein
- lua support can be done later
* write the main() part of the application
- start config system
- parse commandline opts
* librificate all other components (backend, proc, gui)
- define their lumiera interfaces
- decide if they shall be statically linked, becoming shared libs or plugins
These are rather distributed tasks; after 'lumiera' is set up, all other components have to be adapted to be loadable from it.
Pros
^^^^
* flexible plugin based architecture
- later: loads only things which are necessary for a given task
* very fast startup
* things which can't be used in a given environment can be left out (no gui on a headless system with no $DISPLAY set)
* inter dependencies between interfaces and plugins are automatically tracked.
Cons
^^^^
Ichthyo raised concerns that this kind of flexibility might attract other people to write things which are not in our intention and break future design and compatibility. We need to carefully document and define interfaces so that people don't abuse them!
Alternatives
^^^^^^^^^^^^
We discussed doing the startup/main() through the GUI as it is currently done; it would also be possible to produce several more
executables (lumigui, luminode, lumiserver, ...). But I think we agreed that a common loader is the best way to go.
Rationale
~~~~~~~~~
I just think this is the best way to ensure an enduring design, even for future changes we cannot foresee yet.
Comments
--------
We discussed this issue lately on IRC and I got the feeling we pretty much agreed on it.
* we don't want to build a bunch of specialized executables, rather we build one core app which pulls up optional parts after parsing the config
* we change the GUI to be loaded via the module/interfaces system
From reading the above text, this proposal seems to capture that. But I am somewhat unsure whether the purpose of this proposal isn't rather to load just a micro kernel and then pull up components according to configuration. I wouldn't accept such an architecture, and I clearly stated so right at the beginning of our project. I accepted a very flexible and language-neutral plugin system on the condition that the core remains in control, stays ''reasonably'' monolithic, and componentization doesn't handicap us in creating an architecture based on abstractions and exploiting proven design patterns.
It has that flexibility, yes. But that does not mean we have to abuse it in any way. The main() there, and thus the bootstrap of the application, is
under our tight control; if we want to reject scriptable/highly configurable bootstrapping there, then we can just do so. That's more a social than a
technical decision. I personally don't like it when a design is 'nannying' and puts too many constraints into unforeseen areas. If the computer can do some
task better than we can, it shall do it. This still means that I want to stay very much in control; it should only do some tedious, error-prone managing
tasks for me. For example, the interfaces system already tracks inter-dependencies between plugins and interfaces automatically, without the programmer
needing to care about or define anything. The interface system gets it right, and we won't need to care about initialization order. I added that because I
consider it absolutely important for plugins which might be supplied by third parties over which we have no control. But I now realize that we can
nicely use that for our own internal things too. Imo that's a very valuable service.
-- link:ct[] [[DateTime(2008-11-08T06:26:18Z)]]
Some further minor details: We didn't finish the discussion about namespaces on the last meeting. (I know I still have to write up a proposal showing the two or three alternatives I see regarding namespace organisation). But probably, "lumiera::" will be our top level interface namespace and then probably the lumiera directory will be taken by that. I see no problem also putting some startup facilities in there, but generally, it shouldn't contain implementation code, only headers and abstract classes. If that's going to become a problem, we should consider to use a separate package for the startup, e.g. "src/boot".
Another point is, you need not write a main, because there is already one. Please have a look at it, especially with regards to the [wiki:self:../GlobalInitialization global initialisation]. Further, last year I investigated boost::program_options and think it's fine; I have used it for my test class runner since then. I don't think there is any reason why we should bother with parsing options (most config is pulled up from the session). I don't think we get many program options, maybe something to set a GUI skin. Moreover, last year I wrote a thin wrapper around the commandline and integrated it with the boost options parser such that user code can receive the remaining options as a vector of std::strings. Please have a look at http://git.lumiera.org/gitweb?p=LUMIERA;a=blob;f=tests/common/mainsuite.cpp;h=455bfd98effd0b7dbe6597f712a1bdfa35232308;hb=HEAD[the test class runner main] for a usage example. I really want our Lumiera main to be clean and expressive in the way shown there.
Probably the most important part of the startup is pulling up the session core; because of that I think most of the startup process falls into the realm of the Proc-Layer. Within Proc, I don't want any significant string manipulations done with C-strings and I don't want raw arrays when we can use std::vector.
-- link:Ichthyostega[] [[DateTime(2008-11-06T19:28:13Z)]]
I 'dropped' this now because we do it somewhat differently now and I don't want to document it here :P
-- link:ct[] [[DateTime(2009-02-03T17:28:28Z)]]
''''
Back to link:Lumiera/DesignProcess[]

@@ -0,0 +1,51 @@
Architecture Overview
=====================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-03-06_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Architecture Overview
---------------------
This proposal intends to capture the envisioned Architecture of the Application.
See the SVG drawing http://www.lumiera.org/gitweb?p=LUMIERA;a=blob_plain;f=doc/devel/draw/Lumi.Architecture-1.svg;hb=HEAD[Overview of Lumiera Architecture] maintained in GIT
Description
~~~~~~~~~~~
* the Application has three Layers: Backend, Proc and GUI
* the Application shall be completely functional without GUI (script-driven)
* all IO, media data fetching, processing and bookkeeping falls within the realm of the Backend
* all media object manipulation, deciding and configuration is the Proc Layer's job
* extensible by plugins on all levels, highly configurable, but not totally componentized (micro kernel) architecture
* strong separation between high-level and low-level areas of the Application
* the user/GUI manipulates a high-level model whereas rendering is based on a corresponding low-level model
* the stored Session (state) comprises the high-level model, a collection of Assets and accompanying configuration
* (possibly) several storage backends, abstracted out by a common interface
Comments
--------
* Alcarinque made http://telniratha.atspace.com/ui_architecture.jpg[some drafts] for the ui. Here is the http://telniratha.atspace.com/ui_architecture.odg[oodraw document]. This is not a technical draft at all, it is just an idea.
* Wouldn't the Config Rules (embedded Prolog) also interact with the High Level Model? Or would that be expanding its scope too much? I imagine default/user configurable settings such as explicit !LocatingPin placement vs Relative !LocatingPin placement. For example, in an AMV, or any music video actually, the positioning of clips should be always relative against the audio/music. However, if you are editing a scene in a movie, you want the next scene to appear relative to the last scene played. In the first, you want to keep the scenes always synced up against the audio, while in the latter, you just want the scenes to appear one after another.
--- link:PercivalTiglao[] [[DateTime(2008-07-16T05:32:45Z)]]
* Yes, indeed, that is what I am planning. The drawing above just doesn't show every connection. The interaction between high-level model and rules system mostly goes via the "session defaults", which are facts ''and'' rules. Thus, in your example, the user would just use the "default placement". My Intention was to use '''tags''' to quite some extent. The user would be able to tag the source footage, and then rules can kick in when a certain tag applies.
Incidentally, integrating prolog is immediately on the agenda, because first we want to flesh out the very basic system and get to work basic rendering. Until then, I use a "mock" implementation of the query/rules system, which just returns some hard wired defaults.
-- link:Ichthyostega[] [[DateTime(2008-09-04T15:38:21Z)]]
Conclusion
----------
Accepted. The drawing was moved into the GIT tree, hopefully it will be maintained in future.
Back to link:Lumiera/DesignProcess[]

@@ -0,0 +1,158 @@
Design Process : C Coding Style Guide
=====================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-07-03_
*Proposed by* link:ct[]
-------------------------------------
C Coding Style Guide
--------------------
I introduce here my favorite C coding style.
Description
~~~~~~~~~~~
In the following I'll explain a C coding style I have used frequently in other projects. Take this as a suggestion for parts written in C (it definitely makes no sense for C++). We probably don't need to enforce this style for normal C code, but for the related link:Lumiera/DesignProcess/AllPluginInterfacesAreC[] it really makes sense to have a well-defined style.
Function names follow the rule:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.`namespace[\_object][\_verb[\_subjects]][\_version]`
* namespace is \'lumiera_\' appended with some subsystem tag (`lumiera_plugin_audio_`)
* object is the type of the \'this\' object we are addressing, maybe followed by the object we are returning
* verb is the action to take (new, copy, free, set, clear, ... etc.); if omitted, the action is `get`
* subjects is a descriptive list of the arguments which the action takes; this should be a human-readable word describing the parameter concept, NOT encoding a concrete type (name, age, weight; not string, int, float)
* for interfaces we may use versioning; then a number is appended to the name, but we alias the actual function with an inline function or a macro without this number
Prototypes follow the rule:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.`rettype function (Object self, ...)`
* function is the function name as above
* rettype depends on what object and verb define: setters return a pointer to the set element if an allocation could be involved (or NULL on failure), an int if the setter does some checks on the supplied argument (0 indicates failure, !0 success), and void for procedures and actions which can never fail
* Object is a pointer to the referred object (like \'this\' in C++); in rare cases (_new()) functions may be used without this self pointer, see below
* `...` are the types and names of the arguments described in the `subjects` part of the name
Object variants:
^^^^^^^^^^^^^^^^
For each `struct namespace_foo_struct` we have following typedefs:
[source,C]
----
typedef struct namespace_foo_struct namespace_foo;     // canonical typename
typedef const namespace_foo* const_NamespaceFoo;       // pointer to const object
typedef namespace_foo* NamespaceFoo;                   // canonical pointer/handle
typedef namespace_foo** NamespaceFoo_ref;              // when intending to mutate the handle itself
typedef const namespace_foo** const_NamespaceFoo_ref;  // for a const object handle
----
Examples:
+++++++++
.`lumiera_plugin_audio_sample_normalize_limit_1 (AudioSample self, int limit)`
* namespace is \'lumiera_plugin_audio\'
* operates on a \'sample\' object (and likely returns a pointer)
* operation is \'normalize\'
* takes one additional parameter describing the limit for normalization
* this is a version 1 interface; later we define:
[source,C]
----
#define lumiera_plugin_audio_sample_normalize_limit \
        lumiera_plugin_audio_sample_normalize_limit_1
----
.`lumiera_plugin_audio_sample_rate_1 (AudioSample self)`
* this would be just a getter function returning the sample rate
.`lumiera_plugin_audio_sample_set_rate_1 (AudioSample self, unsigned rate)`
* a setter, note that the 'rate' is defined after the verb
Tasks
^^^^^
Pros
^^^^
* supplements documentation, sometimes even makes it unnecessary
* well defined namespace
* C language bindings without tricks
Cons
^^^^
* very long identifier names
* not completely unique
Alternatives
^^^^^^^^^^^^
* Hungarian notation isn't readable, fails semantic consistency, has renaming issues and encodes types rather than concepts. There are simpler schemes which are even more unambiguous.
Rationale
~~~~~~~~~
I have been trying/using this scheme for some time now. At first it looks like overhead to encode arguments into function names, but the intention is to make code easily readable and memorable; when one follows this scheme, one seldom needs to look up the docs about the API. In fact it sometimes even turns out that one wants to use a function name which isn't defined in the API, which is a good indicator to add it to the API.
This scheme is not fully unambiguous but suffices for all practical tasks. It encodes parameters like C++ does for overloading, without strange mangling. All names are global in a well-defined namespace, which is very natural for C (other OO-like C styles involve structs and hand-written vtables; with this scheme we trampoline from these global names to vtables *only* if needed).
Conclusion
----------
Finalized on link:MonthlyMeetings[]/Protocol-2008-03-06
Comments
--------
I strongly object to promoting such a thing as a general "Style Guide". It can be a help or last resort if you are forced to work with improper
tools (a situation that's rather frequent in practice, though). __As such it is well chosen and practical__.
But basically, it shows several things:
* you are using a global namespace
* you deal with way too fat interfaces
* you mix deployment metadata (a version/compatibility check) with functional code
All of this indicates some design style breakage, so it would be preferable to fix the design if possible.
The only part I'd like to support as a Style Guide is the rule of using the "verb+object" pattern for
creating function names.
-- link:Ichthyostega[] [[DateTime(2007-07-08T11:42:39Z)]]
This probably needs a little explanation:
* you are using a global namespace
This is only about C, for names which get exported; C has only a global namespace and we need some way to get unique names. The link:Lumiera/DesignProcess/AllPluginInterfacesAreC[] already uses better/smaller namespaces by defining interfaces as C structs. The full-blown long names explained here are technically not needed when we use the plugin system as proposed; I just showed them here for completeness. Next, should we decide on alternative linking methods like static builds, we would need to declare all "verb+object" functions static, else there is a high probability of clashes.
* you deal with way too fat interfaces
How can you tell that? This is only a naming style; no interfaces are mentioned here. I am all for small, well-defined, specialized interfaces.
* you mix deployment metadata (a version/compatibility check) with functional code
Yes, I can't figure out how to do it better yet still lightweight in C. The _version thing is something I added here after the interfaces proposal. I am working on an example of how this will be used in a more friendly way.
Note again that this is a "naming system"; it is intended to be very verbose and give unique, declarative names. It is not about design! Design is done as usual, and this applies only when things have to be exported as C symbols (both exported and C!). This has zero implication for C++ code, zero implication for C functions which are not exported (though I personally still prefer this style), and finally, when we do the interfaces thing as I proposed, the naming can be much simpler; see the examples there or in my repository.
-- link:ct[] [[DateTime(2007-07-10T08:03:06Z)]]
Thanks, your explanation together with the example in git made the usage pattern much clearer. I think the _version postfix is especially helpful on the names
of the plugin interfaces (structs in C), and it will probably be good practice to have one such common plugin interface at every "plugin extension point",
i.e. every point in the system that can be extended by plugins.
-- 217.110.94.1 [[DateTime(2007-07-10T17:23:33Z)]]
''''
Back to link:Lumiera/DesignProcess[]

@@ -0,0 +1,80 @@
Design Process : Clip Cataloging System
=======================================
[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-07-26_
*Proposed by* link:JordanN[]
-------------------------------------
Clip Cataloging System
-----------------------
A system for storing, organizing, and retrieving assets, such as images and videos.
Description
~~~~~~~~~~~
Organizations that work with video, and even home users, tend to have massive collections of stock videos and images that they will need to find and use in their projects. A Linux-based system is needed to help them to organize, tag, and retrieve assets from those collections. Being able to find the clips the user needs and bring them into his timeline, will mean that the user will be able to more rapidly complete his project.
This could be implemented as a separate application, but integrated for use in a Linux-based video workflow suite, including apps like Lumiera and Blender.
Tasks
~~~~~
* Identify ways in which existing groups organize their collections.
* Determine pros / cons of each method
* Implement a solution that will be modular enough for other content creation projects to also use
Pros
~~~~
* Faster, more efficient workflow
Cons
~~~~
Not directly a part of Lumiera. If not implemented separately, could cause undue bloat.
Alternatives
~~~~~~~~~~~~
Storage-based organization. User must remember where files are, and must store them correctly. Not clip-based, so the entire video must be imported and the desired portion selected.
Rationale
~~~~~~~~~
Comments
--------
* Such a thing is planned, but as you pointed out, this would be a rather standalone application which needs a lot of effort to implement. We don't have the development power to do that now. If someone wants to work on it, please contact me. The general idea is to put all kinds of resources (Footage, Clips, Effects, Subprojects, Sounds, ...) into a database which then gets tagged/attributed in different ways (implicit things like 'filename', 'type', 'length'; automatically deducible things like 'Exposure', 'Timecode', ...; and manual tags like who was on set, location, ...). Then present this all in a *good* GUI (by default just showing it filesystem-like), but one can define queries on this database and the generated views will then be storable.
Back to Lumiera: for now we will likely just use 'normal' file-open dialogs until the above system becomes available.
-- link:ct[] [[DateTime(2008-07-26T08:31:42Z)]]
* Yes, it's indeed an important feature we should care for. But cehteh is right, we have more important things to do first. But feel free to target it.
* Also, we'd need integration with production support systems, for example http://celtx.com/[CELTX].
* The interface to the Lumiera App would be to populate the asset manager with the required assets
-- link:Ichthyostega[] [[DateTime(2008-07-27T22:19:38Z)]]
Videos, Audio, Clips and Resources Manager by using plugins for FOSS GPL "Library & Collections Management" programs.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The video and audio raw material, clips, etc. could be managed using code that is already available in projects that carry out the same tasks, for example library managers, media (video, audio or CD) collections, or Integrated Library Systems (ILS).
Examples of library management programs:
. Kete - http://kete.net.nz/[]
. Koha - http://www.koha.org/[]
. link:GreenStone[] - http://www.greenstone.org/[]
. Evergreen - http://open-ils.org/faq.php[]
An additional benefit of using "library" managers is that they can handle interloans, referencing of "other" (people's/organizations') libraries, numbering systems, descriptions and classifications, thousands to millions of items, search systems, and review and comment systems, plus the benefits of open source that allow easy expansion of features.
The use of task-oriented programs in this way makes use of established code that has been developed by experts in their field. Any database system would be useful for managing all these media, but one developed by people who have been working with cataloging systems for a long time is likely to do well. Plus, it can be readily improved by people who do not have to know the first thing about how to design video-editing programs. The program also gets improved by its own community, which adds features or performance to Lumiera without us even having to "drive" the development. --link:Tree[] [[DateTime(2008-08-27T20:38:00NZ)]]
''''
Back to link:Lumiera/DesignProcess[]

@@ -0,0 +1,93 @@
Design Process : Coding Style
=============================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-27_
*Proposed by* link:ct[]
-------------------------------------
CodingStyle
-----------
Define coding style standard which we will use.
Description
~~~~~~~~~~~
We need to agree on some coding style; imo consistency is the most important part of this, no matter which style we use.
See http://en.wikipedia.org/wiki/Indent_style[]
.Notes:
* no tabs, use spaces!
.Proposed:
* K&R by ichthyo
* compact and well known
* GNU by cehteh
* imo the best readability (albeit a little strange)
* cinelerra might apply as official GNU project someday
Another question: __how to write identifiers?__
.Proposed:
* ichthyo: use link:CamelCase[], start ClassNames upper case and variableNames in lower case. Make all namespaces and package (dir) names completely lowercase
Tasks
^^^^^
* Bouml config to generate this style
* footers (or headers) to configure common editors to use this style by default
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Rationale
~~~~~~~~~
Conclusion
----------
we agreed on GNU style
-- link:ct[] [[DateTime(2007-07-03T04:04:22Z)]]
Comments
--------
Since link:ct[] called for spaces instead of tabs first, we should stick to that. I think all other reasons will lead us nowhere!
Although I'm used to a BSD/KNF-like coding style, I will try the GNU one. After all, the wikipedia page mentions no disadvantages of that style :)
I just proposed K&R because it is widely accepted. Personally, I was never very fond of K&R style; I always preferred putting opening braces
on the left. I never used GNU style until now, but it looks somewhat appealing to me. (btw, ECLIPSE comes with presets for all these styles :-P ).
Anyhow, I can adapt to almost any style. The only thing I really dislike is using tabs (with the exception of database DDLs and CSound files, where
tabs are actually helpful) :)
''''
Back to link:Lumiera/DesignProcess[]

@@ -0,0 +1,69 @@
Design Process : Data Backend
=============================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-04_
*Proposed by* link:ct[]
-------------------------------------
DataBackend
-----------
Describe the DataBackend functionality.
Description
~~~~~~~~~~~
This just starts as a braindump; I will refine it soon:
. handle all files lumiera uses at runtime (media, edl, temp data)
. manage filehandles; lumiera might use more files than there are available filehandles
. manage temporary data
. do caching
. io is block-oriented: the backend tells the core where it can expect the data (not read()/write() like)
. kind of a garbage collector
. do prefetching
. no/low latency for the core; the prefetcher and other things ensure that data is available in time
. translate any input into a format which the lumiera core understands (demux, decode)
. same for encoding to output formats
. offer a plugin API for encoders/decoders
. maybe network backend for serving data to distributed rendernodes
. can do some load control or management (trigger adaptive rendering if system is idle etc)
. pull based arch
.Notes:
* ichthyo wrote also some ideas on http://www.pipapo.org/pipawiki/Cinelerra/Developers/ichthyo/Cinelerra3/Architecture[Architecture] and a sketch/draft about http://www.pipapo.org/pipawiki/Cinelerra/Developers/ichthyo/Possibilities_at_hand[things possible in the middle layer]
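As a rough illustration of the non-read()/write() access idea from the list above, the core could hand the backend a frame request and get back a pointer to where the data will appear (all names and the in-memory "media" store below are made up for this sketch, not actual Lumiera API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: instead of read()/write(), the core asks the
 * backend where a frame's data can be expected; the backend fills in
 * a pointer into its (possibly mmapped/prefetched) storage. */

typedef struct frame_request
{
  const char *file;     /* media file the frame belongs to */
  size_t      frameno;  /* frame index within that file */
  void       *data;     /* filled in by the backend: start of frame data */
  size_t      size;     /* length of the frame data in bytes */
} frame_request;

/* toy 'backend': serves frames from a static in-memory buffer */
static unsigned char media[4][16];

int
backend_acquire (frame_request *req)
{
  if (req->frameno >= 4)
    return -1;                  /* frame not available */
  req->data = media[req->frameno];
  req->size = sizeof media[req->frameno];
  return 0;                     /* data stays valid until released */
}
```

A real implementation would of course involve the prefetcher, caching and file-handle management listed above; the point of the sketch is only the "tell me where the data is" calling convention.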
Tasks
^^^^^
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Rationale
~~~~~~~~~
Comments
--------
Sounds fairly complete to me
-- link:Ichthyostega[] [[DateTime(2007-06-16T23:19:44Z)]]
Development takes place in the repo now
-- link:ct[] [[DateTime(2007-06-27T16:14:56Z)]]
''''
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-09-21_
*Proposed by* link:nasa[]
-------------------------------------
Delectus Shot Evaluator
-----------------------
This is a brain dump about the shot evaluator subproject.
Description
~~~~~~~~~~~
Brainstorm on Delectus
~~~~~~~~~~~~~~~~~~~~~~
Some (many) of the ideas presented herein come from the various parties involved in the Lumiera discussion list and IRC channel #lumiera.
http://lists.lumiera.org/pipermail/lumiera/2008-September/000053.html[] -- the main discussion thread
Additionally, a lot of great concepts for how to streamline the interface are derived in part from link:KPhotoAlbum[].
I use tags, keywords, and metadata almost interchangeably, with the exception that metadata includes computer-generated metadata as well.
These are not tags in the conventional sense -- they don't have to be text. In fact the planned support (please add more!) is:
* Text -- both simple strings (tags) and blocks
* Audio -- on the fly (recorded from the application) or pregenerated
* Video -- same as audio
* Link -- back to a Celtx or other document resource, forward to a final cut, URL, etc
* Still image -- inspiration image, on set details, etc
* ID -- such as the serial number of a camera used, the ISBN of a book to be cited, etc
As such, the tags themselves can have metadata. You can see where this is going...
Also, the tags are applied to "clips" -- a term I use interchangeably for source material imported into the application and for a slice of that material that tags are applied to. Any section of a video or audio source can have tags applied to it.
Two key functions: assign metadata and filter by metadata.
Clips are one thing, but in reality most clips are much longer than their interesting parts.
Especially for raw footage, the interesting sections of a clip can be very slim compared to
the total footage. Here is a typical workflow for selecting footage:
. Import footage.
. Remove all footage that is technically too flawed to be useful.
. Mark interesting sections of existing clips, possibly grouped into different sections.
. Mark all other footage as uninteresting.
. Repeat 3-4 as many times as desired.
Some key points:
* Import and export should be as painless and fast as possible.
* Technically flawed footage can be classified both manually and by the computer.
* In some cases (e.g. documentaries, dialog) audio and video clips/footage can follow different selection processes.
It is possible to use video from footage with useless audio, or audio from footage with useless video.
* "Interesting" is designed to be broad and is explained below.
* Steps 2-5 can be performed in parallel by numerous people and can span many different individual clips.
In simple editors like Kino or iMovie, the fundamental unit used to edit video is the clip. This is great for a large number of uses, such as home videos or quick Youtube postings, but it quickly limits the expressive power of more experienced engineers in large scale productions (which are defined for the purposes of this document to include more than 2 post-production crew members). The clip in those editors is trimmed down to include only the desired footage, and these segments are coalesced together into some sort of coherent mess.
The key to adequate expressive power is as follows:
* Well designed, fast metadata entry. Any data that can be included should be, by default, and ideally the metadata entry process should run no less than about 75% as fast as simple raw footage viewing. Powerful group commands that act on sections of clips, and also grouping commands that recognize the differences between takes and angles (or individual mics), enhance and speed up the process.
* Good tools to classify the metadata into categories that are actually useful. Much of the metadata associated with a clip is not actively used in any part of the footage generation.
* Merging and splicing capabilities. The application should be smart enough to fill in audio if the existing source is missing. For example, in a recent project I was working on, a camera op accidentally set the shotgun mic to test mode, ruining about 10% of the audio for the gig. I was running sound, and luckily I had a backup copy of the main audio being recorded. This application should, when told that these two are of the same event at the same time, seamlessly overlay the backup audio over the section of the old audio that has been marked bad and not even play the bad audio. This is just background noise, and the immense task of sorting through footage needs to be simplified as much as possible.
* Connection to on site documentation and pre-production documentation. When making decisions about what material to use and how to classify it, it is essential to use any tools and resources available. The two most useful are onsite documentation (what worked/didn't work, how the weather was, pictures of the setup, etc all at the shoot) and pre-production (what the ideal scene would be, what is intended, etc). Anything else that would be useful should be supported as well.
* Be easily accessible when making the final cut. Lumiera is, if the application gets up to speed, going to serve primarily to render effects, finalize the cut, and fine tune what material best fits together. Any metadata, and certainly any clipping decisions, should be very visible in Lumiera.
* Notes, notes, notes! The application should support full multimedia notes. These differ from (4) in that they are generated during the CLASSIFICATION process, not before. This fits in with (5) as well -- Lumiera should display these notes prominently on clip previews. The main way for multiple parties to communicate and even for a single person to stay organized is to add in notes about tough decisions made and rationale, questionable sections, etc. These notes can be video, audio, text, etc from one of the clips, from the machine used to edit (such as using a webcam or microphone), or over the network (other people's input).
Too technically flawed
^^^^^^^^^^^^^^^^^^^^^^
A clip is said to be too technically flawed if it has no chance of making it to the final product whatsoever. This does not, however, preclude its use throughout the post-production process; for example, part of a clip in which the director describes his vision of the talent's facial expression in a particular scene is never going to make it into the final product, but is invaluable in classifying the scene. In this case, the most reasonable place to put the clip would be as a multimedia note referenced by all takes/angles of the scene it refers to.
As mentioned above, flawed video doesn't necessarily mean flawed audio or vice-versa.
Interesting
^^^^^^^^^^^
An "interesting" clip is one that has potential -- either as a metadata piece (multimedia note, talent briefing, etc) or footage (for the final product OR intermediary step). The main goal of the application is to find and classify interesting clips of various types as quickly as possible.
Parallel Processing
^^^^^^^^^^^^^^^^^^^
Many people, accustomed to different interfaces and work styles, should be able to work on the same project and add interactive metadata at the same time.
Classification interface
++++++++++++++++++++++++
The classification interface is divided into two categories: technical and effective. Technical classification is simply facts about a clip or part of a clip: what weather there is, who is on set, how many frames are present, the average audio level, etc. Effective classification allows the artist to express their feelings of the subjective merits (or failures) of a clip.
DCMS
^^^^
The project is organized around a distributed content management system which allows access to all existing materials at all times. Content narrowing allows for a more digestible amount of information to process, but everything is non-destructive; every change to the clip structure and layout is recorded, preferably with a reason as to why it was necessary or desired.
Content narrowing
^^^^^^^^^^^^^^^^^
With all of the information of an entire production available from a single application, information overload is easy. Content narrowing is designed to fix that by having parts of individual clips, metadata, or other files be specific to one aspect of the overall design. This allows for much more successful use of the related information and a cleaner, streamlined layout. As an example, metadata involving file size has no effect whatsoever on the vast majority of most major decisions -- the answer is almost always "whatever it takes." Thus, it would not appear most of the time. Content narrowing means that it is easy to add back footage -- "widen the view" one step, add it back, and "narrow the view" again.
Multiple cuts
^^^^^^^^^^^^^
There is no need to export a final cut from this application; it is merely the first step in the post-production chain. It is the missing link between receiving raw footage from the camera and adding the well-executed scenes to the timeline. What should come out of the application is a classification of the source footage.
Situational, take, and instance tagging
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is VERY powerful. The first step in using the application is to mark which scenes are the same in all source clips -- where "same" means that they contain sections which would both not run. This can include multiple takes, different microphones or camera angles, etc. The key to fast editing is that the application can edit metadata for the situation (what is actually going on IN THE SCENE), take (what is actually going on IN THIS SPECIFIC RUN), and instance (what is actually going on IN THIS CLIP). When editing a situation, the other referenced clips AUTOMATICALLY get the metadata and relevant sections added. This can be as precise and nested as desired, though rough cuts for level-one editing (first watch-through after technically well-executed clips have been selected) and more accurate ones for higher levels are the recommended method.
Subtitling
^^^^^^^^^^
This came up on the discussion list for Lumiera, and it will be supported, probably as a special tag.
nasa's Laws of Tagging
^^^^^^^^^^^^^^^^^^^^^^
. There is always more variety in data than tags. There are always more situations present in the data than can be adequately expressed with any (reasonable) number of tags. This is OK. All that is needed is the minimum set of unique tags to progress to the next cycle without losing editing intent or the ability to rapidly evaluate many situations.
. Many tags are used many times. "Outdoors" will be a very, very common tag; so will "redub." If conventional names are decided upon and stuck to, it is significantly easier to map the complex interactions between different content situations.
. Avoid compound tags. Do not have "conversation_jill_joe" as a tag; use "conversation," "jill," and "joe" instead. It is very easy to search for multiple tags and very hard to link data that doesn't use overlapping tags.
The interface -- random idea
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is not meant to be a final interface design, just something I wrote up to get ideas out there.
key commands
mutt/vim-style -- much faster than using a mouse, though GUI supported. Easy to map to joystick, midi control surface, etc.
Space stop/start and tag enter
Tab (auto pause) adds metadata special
Tracks have letters within scenes -- Audio[a-z], Video[a-z], Other[a-z] (these are not limits) -- or names.
Caps lock adds notes. This is really, really fast. It works anywhere.
This means that up to 26 different overlapping metadata sections are allowed.
Prompting
Prompting for metadata is a laborious, time-consuming process. There is no truly efficient way to do it. This application uses a method similar to link:KPhotoAlbum[]. When the space key is held and a letter is pressed, the tag that corresponds to that letter is assigned to the track for the duration of the press. (If the space is pressed and no other key is pressed at the same time, it stops the track.) For example, suppose that the following mapping is present:
o = outside
x = extra
p = protagonist
c = closeup
Then holding SPACE over a section and pressing one of these keys would assign the tag to the audio AND video of the section over which the space was held. If instead just the key is pressed (without space being held), that tag is assigned to the section over which it is held. This is very fast and maps well to e.g. a PS3 controller or MIDI control.
If LALT is held down instead of SPACE, only the audio is affected. If RALT is held, just the video is affected.
In order to support scenario/take/clip tagging:
The default is situation. For example, if the keybinding for x is:
x = t:extra ; affect only the take
x = ts:extra ; affect take and scenario
x = c:extra ; extra only visible in this clip!
x = tc:extra ; this take and clip show the extra
etc.
Other keyargs (the part in front of the colon) can be added to account for other uses (e.g. l = all taken on the same location).
Tab is pressed to enter metadata edit mode; this pauses the video. Then press any key to map, and type the tag to associate (with space, multiple tags can be added). The following specials are defined:
[:keyarg:]:TAG is special tagging for scenario/take/clip.
!TAG removes TAG if it is present. This is useful because it allows huge sections of the clip to be defined as a certain tag, then have parts removed later.
a:TAG applies TAG only to the audio.
v:TAG applies TAG only to the video.
p:PATH adds a link to PATH as a special tag.
(This will have a nice GUI as well, I just will always use the keyboard method so I am describing it first. Mapping configurations can be stored in a separate file, as a user config, or in the specific project.)
If ESC is pressed, all currently ranged tags are ended.
Finally, if single_quote is pressed without SPACE or {L,R}ALT down, it marks an "interesting location." Pressing SHIFT+single_quote goes to the next "interesting location" and pressing CTRL+single_quote goes to the previous one. This allows for very quick review of footage.
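The keyarg syntax described above ("the part in front of the colon") is simple enough to sketch as a parser. The scope names and flag values below are illustrative assumptions, not a committed design:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of parsing tag directives like "ts:extra":
 * the part before the colon selects the scope(s), the rest is the tag. */

enum tag_scope
{
  SCOPE_SITUATION = 0,        /* default when no keyarg is given */
  SCOPE_TAKE      = 1 << 0,
  SCOPE_CLIP      = 1 << 1,
  SCOPE_SCENARIO  = 1 << 2
};

/* parse a directive into a scope bitmask; copy the tag name into
   `tag` (buffer size `n`); returns the scope bitmask */
int
parse_tag_directive (const char *directive, char *tag, size_t n)
{
  const char *colon = strchr (directive, ':');
  int scope = SCOPE_SITUATION;

  if (!colon)
    {
      strncpy (tag, directive, n - 1);
      tag[n - 1] = '\0';
      return scope;
    }
  for (const char *p = directive; p < colon; ++p)
    switch (*p)
      {
      case 't': scope |= SCOPE_TAKE;     break;
      case 'c': scope |= SCOPE_CLIP;     break;
      case 's': scope |= SCOPE_SCENARIO; break;
      }
  strncpy (tag, colon + 1, n - 1);
  tag[n - 1] = '\0';
  return scope;
}
```

Other keyargs (e.g. `l` for location, or the `a:`/`v:`/`p:` specials) would slot into the same switch.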
Comments
--------
Rating - Quantitative Rating as well as Qualitative Tagging
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The importance/value of the video for various factors and uses can vary throughout the video. It would be helpful to have the ability to create continuous ratings over the entire track. Ratings would be numerical. Automatic clip selection/suggestion could be generated by using algorithms to compute the usefulness of video based on these ratings (as well as "boolean operations"/"binary decisions" done with tags).
The ratings could be viewed just like levels are -- color-coded and overlaid on track thumbnails.
- Tree 2008-10-25
link:MultiView[] - useful for concurrent ratings input
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It would be convenient to have the ability to view the different tracks (of the same scene/time sequence) at once, so the viewer can input their ratings of the video "on the fly", including a priority parameter that helps decide which video is better than which other video. See the GUI brainstorming for a viewer widget, and key combinations that allow both right- and left-hand input, which could be used for raising/lowering ratings for up to six tracks at once.
- Tree 2008-10-25
I like the idea of rating clips (or rather, takes) a lot. It would be cool to include "hard," "relative," and "fuzzy" ratings. Hard is an exactly defined value (scaled 0-1) that puts the clip in an exact location in the queue. Relative means that one is rated higher or lower than another. Fuzzy is a slider with an approximate value, and there is some randomness. The best part is that these can be assigned to hardware sliders/faders. Pressure-sensitive buttons + fuzzy ratings = really easy entry interface. Just hit as hard as needed! Multiple tracks at once is also an astounding idea. I could imagine some sort of heap (think binary heap, at least for the data structure) which determines the priorities and decides which clips are played. Then the highest-rated clips are played first, down to the worst.
- link:NicholasSA[] 2009-01-04
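The binary-heap idea from the comment above could look roughly like this -- a max-heap ordered by rating, so the best-rated take is always served first (structure and names are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: clips kept in a max-heap keyed on their (hard) rating. */

typedef struct rated_clip { int id; double rating; } rated_clip;

static void
swap (rated_clip *a, rated_clip *b)
{ rated_clip t = *a; *a = *b; *b = t; }

/* push clip onto a heap currently holding n entries (array must have room) */
void
heap_push (rated_clip *heap, size_t n, rated_clip clip)
{
  size_t i = n;
  heap[i] = clip;
  while (i > 0 && heap[(i - 1) / 2].rating < heap[i].rating)
    {                                   /* bubble up past lower ratings */
      swap (&heap[(i - 1) / 2], &heap[i]);
      i = (i - 1) / 2;
    }
}

/* pop the highest-rated clip; n is the heap size before popping */
rated_clip
heap_pop (rated_clip *heap, size_t n)
{
  rated_clip top = heap[0];
  heap[0] = heap[--n];
  for (size_t i = 0;;)
    {                                   /* sift down to restore heap order */
      size_t l = 2 * i + 1, best = i;
      if (l < n && heap[l].rating > heap[best].rating) best = l;
      if (l + 1 < n && heap[l + 1].rating > heap[best].rating) best = l + 1;
      if (best == i) break;
      swap (&heap[i], &heap[best]);
      i = best;
    }
  return top;
}
```

Relative and fuzzy ratings would need a comparison function instead of the plain `rating` field, but the queueing idea stays the same.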
Possible Collaboration with the people from Ardour?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I guess if the thing can do all the things we talked about here, it would be perfectly suitable for sound classification too, and maybe could fill another gap in FOSS: Audio Archival Software, like this: http://www.soundminer.com/SM_Site/Home.html[] (which is very expensive)... maybe the Ardour people would be interested in a collaboration on this?
I like the suggestion of sound classification with a similar (or, even better, identical) evaluator. link:SoundMiner[] looks interesting, but like you say very expensive. I'm a sound guy, so I feel your pain...
- link:NicholasSA[] 2009-01-04
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-03-06_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Design the handling of Parameters and Automation
------------------------------------------------
Parameters of Plugin Components and/or Render Nodes play a role at various levels of the application.
+
Thus it seems reasonable to do a formal requirements analysis and design prior to coding.
Description
~~~~~~~~~~~
Regarding components directly participating in the render (which may be implemented by plugins), we distinguish between *configuration* (static) and *parameters* (dynamic). The point of reference for this distinction is the render process: a plugin configuration may well be variable in some manner, e.g. the plugin may provide different flavours of the same algorithm. But this choice has to be fixed prior to feeding the corresponding plugin asset to the builder. Contrary to such fixed configuration setup, the _parameters_ are considered to be _variable_ during the rendering process. They can be changed on-the-fly from GUI, and they may be automated. Probably, each Render Node will have at least one such _parameter_ -- namely a bypass switch.
Tasks
^^^^^
* we need to work out an introspection mechanism for parameters
- assess what different types of parameters we need
- find out how structured parameters will be (do simple values suffice?)
- define how parameters can be discovered/enumerated
- define a naming scheme for parameters, so they can be addressed unambiguously
* value parameters have a value range. Work out how to handle this
* parameters may need a specific presentation in the GUI
- linear/logarithmic scale, scale reference
- selecting the right widget
So...
. find out to which extent we need these properties
. find out which parts of the App will have which requirements
. choose a best-fitting implementation based on this information
A closely related issue is the handling of *Automation*. The current draft calls for an abstract interface "ParamProvider",
which just allows the link:Plugin/RenderComponent[] to pull a current value, without knowing if the ParamProvider is a GUI widget or
an automation data set with interpolation. The component using the param value should not need to do any interpolation.
We should re-assess and refine this draft as needed. Note: Render Nodes are stateless; this creates some tricky situations.
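Combined with the "All Plugin Interfaces Are C" proposal, the ParamProvider draft described above might be sketched as a versioned POD struct of function pointers. Everything here -- names, the size-as-version convention, the linear-automation example -- is an illustrative assumption, not the actual draft interface:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: the component pulls a current value without knowing whether
 * the provider is a GUI widget or interpolated automation data. */

typedef struct param_provider
{
  size_t size;                     /* set by the implementation; acts as version */
  double (*get_value) (struct param_provider *, double time);
} param_provider;

/* example implementation: linear automation between two keyframes,
   interpolation hidden entirely inside the provider */
typedef struct linear_automation
{
  param_provider iface;
  double t0, v0, t1, v1;
} linear_automation;

static double
linear_get_value (param_provider *self, double time)
{
  linear_automation *a = (linear_automation *) self;
  if (time <= a->t0) return a->v0;
  if (time >= a->t1) return a->v1;
  return a->v0 + (a->v1 - a->v0) * (time - a->t0) / (a->t1 - a->t0);
}

linear_automation
make_linear (double t0, double v0, double t1, double v1)
{
  linear_automation a = { { sizeof (param_provider), linear_get_value },
                          t0, v0, t1, v1 };
  return a;
}
```

A GUI widget provider would implement the same `get_value` slot and simply ignore the time argument.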
Alternatives
^^^^^^^^^^^^
?? (any ideas?)
Rationale
~~~~~~~~~
Comments
--------
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-03-06_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Design the Render Nodes interface
---------------------------------
In the current design, the low-level model consists of "Render Nodes"; Proc-Layer and Backend carry out some collaboration based on this node network.
+
Three different interfaces can be identified
* the node wiring interface
* the node invocation interface
* the processing function interface
Description
~~~~~~~~~~~
Render Nodes are created and wired by the Builder in the Proc-Layer. On the other hand, the rendering process is controlled by the backend, which also provides the implementation for the individual data processing tasks. To create a result, output nodes are _pulled_ via the invocation interface, causing the affected nodes to recursively pull their predecessor(s). In the course of this call sequence, the nodes activate their processing function to work on a given set of buffers. Moreover, we plan to use the render network also for gathering statistics.
*Note*: Render Node is an internal interface used by the Proc-Layer and activated by the Backend. Plugins are planned to be added via adapter nodes. Thus the Render Node interface need _not_ be exported.
the wiring interface
^^^^^^^^^^^^^^^^^^^^
This part of the design defines how nodes can be combined and wired up by the builder to form a network usable for rendering. For this purpose, the link:ProcNode[] is used as a shell / container, which is then configured by a const WiringDescriptor. Thus, the node gets to know its predecessor(s) and is preselected to use a combination of specific working modes:
* participate in caching
* calculate in-place
* source reading
* (planned) use hardware acceleration
* (planned) remote dispatched calculation
Most nodes will have just a single predecessor, but we can't limit nodes to a single input, because some calculation algorithms natively need to work on several data streams simultaneously. This means a single node can be involved in the calculations for multiple streams (several pull calls on the same frame number but for different channels, and in each case maybe a different output node). I decided to rely solely on the cache for avoiding duplicate calculations caused by this complication, because I deem it to be a corner case.
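In C terms, the shell/WiringDescriptor split above might look like the following sketch (all struct and flag names are illustrative, not the actual `ProcNode`/`WiringDescriptor` types):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: a node is a shell configured once by a const wiring
 * descriptor naming its predecessor(s) and preselected working modes. */

enum node_mode
{
  MODE_CACHING  = 1 << 0,   /* participate in caching */
  MODE_INPLACE  = 1 << 1,   /* calculate in-place */
  MODE_SOURCE   = 1 << 2,   /* source reading */
  MODE_HW_ACCEL = 1 << 3,   /* (planned) hardware acceleration */
  MODE_REMOTE   = 1 << 4    /* (planned) remote dispatched calculation */
};

struct proc_node;

typedef struct wiring_descriptor
{
  unsigned                 modes;     /* bitmask of node_mode flags */
  size_t                   n_pred;    /* most nodes have exactly one */
  struct proc_node *const *pred;      /* predecessor nodes to pull from */
} wiring_descriptor;

typedef struct proc_node
{
  const wiring_descriptor *wiring;    /* set once by the builder */
} proc_node;

/* query whether a node was preselected for a given working mode */
int
node_uses (const proc_node *n, enum node_mode m)
{ return (n->wiring->modes & m) != 0; }
```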
the invocation interface
^^^^^^^^^^^^^^^^^^^^^^^^
This is intended to be a rather simple "call-style" interface, without many possibilities to influence the way things happen. You pull a node and will find the results in a provided buffer or the cache, but you can't even change the frame data type of the result. Besides the node invocation, functions for collecting statistics will be accessible here too. (Probably these functions will be _implemented_ in a classic-OO fashion by virtual functions, but that's another story.)
the processing interface
^^^^^^^^^^^^^^^^^^^^^^^^
The individual nodes are configured to call a plain-C `process()` function and provide an array of buffer pointers to be used within this function. For the purpose of invoking actual data processing, it is irrelevant whether this function is implemented somewhere in the backend or provided by a plugin. At this point, no type or other meta information is passed; rather, the processing function is supposed to do The Right Thing ^TM^
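A minimal sketch of such a processing function, assuming (purely for illustration) that inputs come first in the buffer array and that the frame size is known to the function:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: plain-C process function receiving only an array of buffer
 * pointers; no type or meta information is passed along. */

typedef void (*process_fn) (void *buffers[], size_t n_buffers);

/* example: mix two float frames into a third (buffers[0..1] in,
   buffers[2] out; a fixed frame size of 64 samples is assumed) */
void
process_mix (void *buffers[], size_t n_buffers)
{
  float *in1 = buffers[0], *in2 = buffers[1], *out = buffers[2];
  for (size_t i = 0; i < 64; ++i)
    out[i] = 0.5f * (in1[i] + in2[i]);
}
```

Whether this function lives in the backend or in a plugin is invisible at the call site -- the node just invokes the configured `process_fn`.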
Tasks
^^^^^
* What services do we expect from Render Nodes. What do we plan to do with a render node?
* What different kinds (if any) of Render Nodes can be foreseen?
* order the required functionality by Proc / Backend. Find out specific implementation constraints.
* work out a design based on this information
Rationale
~~~~~~~~~
The purpose of this Design Entry is to give a summary; the questions and the details of carrying out the operations are much more involved.
+
Please see the http://www.lumiera.org/wiki/renderengine.html#Rendering[Proc-Layer impl documentation (TiddlyWiki)] and the http://www.lumiera.org/gitweb?p=lumiera/ichthyo;a=blob;f=src/proc/engine/procnode.hpp;h=9cf3a2ea8c33091d0ee992ec0fc8f37bb5874d34;hb=refs/heads/proc[Source Code] for details
(and/or contact Ichthyo for in-depth discussion of those technical details)
Comments
--------
Back to link:Lumiera/DesignProcess[]

Design Process : Development Framework
======================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-08_
*Proposed by* link:ct[]
-------------------------------------
Development Framework
---------------------
Here we collect how the tree/repository will be set up and which tools are needed/required for working on Lumiera.
Description
~~~~~~~~~~~
.Tools required:
* unix like shell environment with standard tools
* we don't require a specific linux distribution
* git 1.5.3 (not out yet, but really soon, we want submodules support)
* GNU toolchain, autoconf/automake (maybe scons or something else?)
* bouml (version case unresolved)
.Tools suggested:
* doxygen
Tasks
^^^^^
* cehteh will set up an initial repository (see link:RepositorySetup[proposed structure])
* ichthyo has set up a debian-APT-depot at http://deb.ichthyostega.de[] and will add backport packages there if necessary, so the Debian people can stay near Etch/stable for the time being
* ichthyo volunteers to get the new source into a Debian package structure from the start (same as the current cinelerra is)
.And for later:
* decide on a Unit Test framework (see link:UnitTests_Python[this Proposal])
* can we get some continuous integration running somewhere (nightly builds, testsuite)?
* find a viable toolchain for writing more formal documentation. link:ReStructured[] Text, Docbook etc?
Pros
^^^^
Cons
^^^^
* the git submodules are just not there at the moment; we need to get along with one large monolithic project until they are available.
Alternatives
^^^^^^^^^^^^
* use visual studio and .NET :P
Rationale
~~~~~~~~~
The project will be tied to a distributed infrastructure/git. With recent git submodules support it should be easy to let contributors only checkout/work on parts of the tree (plugins, documentation, i18n, ...). We want to build up a set of maintenance scripts in a ./admin dir.
At the moment we go for rather bleeding-edge tools; afterwards we want to stay at a given version to avoid incompatibility problems.
Later on a version switch needs agreement/notification by all devs.
Comments
--------
I am always in favor of getting the basic project organization and all scripting up and running very early in a project.
I would like the project to take a rather conservative approach on the required libs and tools, so that finally,
when we get into a beta state, we can run/compile on the major distros without too much pain. I wouldn't completely
abandon the idea of targeting \*bsd and osx as well later on.
I would propose to move Doxygen to "required". The idea to use scons sounds quite appealing to me at the moment.
Besides that, I think it could be moved to "Draft".
-- link:Ichthyostega[] [[DateTime(2007-06-17T00:18:40Z)]]
Moved to Draft. For developer documentation I would prefer doxygen. For user documentation we can do a similar/same thing as nicolasm did for cinelerra2: wiki for edits, git to maintain it, based on GNU texinfo. Texinfo is quite limited in its capabilities, but it suffices; seeing the current cin2 docs, I say it's rather well done.
We need to factor out some of the proposals from this page to subpages (scons, documentation, testing, ...)
-- link:ct[] [[DateTime(2007-06-17T17:27:59Z)]]
It would really suck if we have to go through the experiences described http://freshmeat.net/articles/view/889/[here]. I have experienced parts of that in the past. I have only some beginner experience with writing autotoolized projects (mostly based on trial-and-error) and no experience in any other build system (such as scons). As such, I still believe that autotools can be manageable (for me personally) once the initial hurdle of learning is overcome.
I'm all for Doxygen documentation. Related to documentation are http://www.splint.org/[splint] http://www.splint.org/manual/html/appC.html[annotations] (comments). I suggest that we consider using such a tool for QA. Like link:ct[] said, this should be discussed in a subpage.
I agree with using currently bleeding-edge tools.
We now have a 'compatibility wiki'; finalized this proposal
-- link:ct[] [[DateTime(2008-03-26T13:43:26Z)]]
''''
Back to link:Lumiera/DesignProcess[]

Design Process : Distributed Development Framework
==================================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-07_
*Proposed by* link:ct[]
-------------------------------------
Distributed Development Framework
---------------------------------
Create our own toolset to track issues, tasks, bugs in a distributed manner.
Description
~~~~~~~~~~~
* Use git as backend
* Things get automatically updated/merged/pushed on a mob branch here
* Should be self-contained, checkout, ready to use
Tasks
~~~~~
* Several (shell?) scripts which ease the use
Pros
~~~~
Cons
~~~~
Alternatives
~~~~~~~~~~~~
Rationale
~~~~~~~~~
* To cut the administrative overhead down
Comments
--------
Made this 'final'; this proposal got accepted and is already in use without much discussion
-- link:ct[] [[DateTime(2007-06-27T16:07:13Z)]]
Back to link:Lumiera/DesignProcess[]

EDL's Are Meta-Clips
====================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-07-15_
*Proposed by* link:PercivalTiglao[]
-------------------------------------
EDLs Are Meta-Clips
-------------------
One very useful property of EDLs is the ability to contain other EDLs and treat these "inner" EDLs as a unit. The most logical implementation of this is to have EDLs themselves be treatable as a Clip-MObject. Recursion is known to be a powerful feature that is relatively simple to understand. By making EDLs recursive, some higher-level features can be implemented more easily by taking advantage of this fact.
Description
~~~~~~~~~~~
There is a class of problems that this sort of behavior would help with.
First, you can organize a movie recursively. For example, you can create a large movie file and organize it into Introduction, Chapter1, Chapter2, Climax, and Conclusion. From there, you can edit Introduction EDL, then the Chapter1 EDL, and so forth.
From a bottom-up perspective, you can build a collection of Stock Footage (for example, transformation scenes, lip-sync frames, or maybe a running joke). You can then use the Stock Footage even if it isn't finished, and you can re-edit your stock footage later once you have a better idea of what you want. From there, the edits in these other files will still be in sync in the final render of the big project. Further, each instance of Stock Footage can be personalized by adding effects on the timeline. Finally, one can create Stock Footage without being forced to render the file to disk first.
The usability benefits are obvious.
In all examples, rendering the main EDL implies that all of the "inner EDLs" have to be re-rendered if the inner EDL was modified. That is one of the only requirements.
Tasks
~~~~~
* Consider usability issues raised by the current Cinelerra userbase.
* Make the EDL object (or a proxy class) extend MObject, Clip, AbstractMO, or some other class that provides this kind of behaviour.
* Consider and detect infinitely recursive cases, e.g. File1 contains File2 and File2 contains File1; this would produce infinite recursion while attempting to render the EDL.
* Implement higher level features in the GUI.
* Create "Compound Tracks" which contain multiple tracks within them.
* Create a GUI that can handle multiple open EDLs at the same time.
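The recursion-detection task above can be sketched as a simple depth-first walk over EDL references. All type and function names here (`edl`, `edl_has_cycle`) are hypothetical, not part of any existing Lumiera API:

```c
/* Sketch: detect reference cycles among nested EDLs ("meta-clips").
 * Hypothetical types -- not the actual Lumiera data model. */
#include <assert.h>
#include <stddef.h>

enum { MAX_REFS = 16 };

typedef struct edl edl;
struct edl {
    const char* name;
    edl* refs[MAX_REFS];   /* EDLs embedded as meta-clips */
    size_t nrefs;
    int visiting;          /* DFS mark: node is on the current path */
};

/* returns 1 if a reference cycle is reachable from e */
static int edl_has_cycle (edl* e)
{
    if (e->visiting) return 1;     /* reached a node already on the path */
    e->visiting = 1;
    for (size_t i = 0; i < e->nrefs; ++i)
        if (edl_has_cycle (e->refs[i])) { e->visiting = 0; return 1; }
    e->visiting = 0;
    return 0;
}
```

The builder could flag any EDL for which this returns 1 as erroneous instead of descending into it.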
Pros
~~~~
* A low level feature that would greatly ease the creation of high level features.
* Multiple applications.
* Eases the use and maintenance of Stock Footage.
Cons
~~~~
* Possibly a lot of code at the Engine level would have to be rewritten?
* Caching / Efficiency issues arise.
- Handling multiple running instances of Lumiera might be difficult. E.g. File1 contains File2, and File1 and File2 are open in two different processes.
- Or maybe even multiple instances of Lumiera across computers that are connected to the same Drive. File1 is opened in Computer1 and File2 is opened in Computer2.
* A corrupted "inner EDL" or Stock Footage would "poison" the whole project.
Alternatives
~~~~~~~~~~~~
* Pre-Rendering Clips
- Unlike the current proposal, you would be unable to re-edit stock footage on a mass scale and reapply it to the whole project.
- Moreover, rendering either introduces a generation loss or requires huge storage for raw (uncompressed) video.
* Loading the resources of the EDL -- This is an alternative way to load EDLs. This should also be supported. It would be an expected feature from the old Cinelerra2 userbase.
Comments
--------
* I got the inspiration for this idea from an email discussion with Rick777 about the Saya Video Editor. -- link:PercivalTiglao[] [[DateTime(2008-07-17T13:34:08Z)]]
* Hi Percival, thanks for writing this proposal. This is indeed a feature which was much discussed in the last months and I consider it to be included almost for sure. We always used the term '''meta-clip''' for this feature, thus I edited the headline (I hope you don't mind).
* Regarding the implementation, I chose a slightly different approach for the proc layer (actually, it's not there yet, but planned right from the start, as I consider this meta-clip feature to be of utmost importance): I'd prefer to add it at the level of the media source used by a clip. The rationale is that at the level of the clip, there is no (or almost no) different behaviour whether the clip pulls from a media file, from a live input or from another EDL. Thus, the implementation would be for a meta-clip to use a special media asset which represents the output of the other EDL.
* Basically, the implementation is quite simple and doesn't necessitate much additional code (the power of recursion at work!). Further, I don't see any caching or efficiency problems. As you already pointed out, there are two fundamental problems:
- We need a cycle detector when building the low-level model. ''But'' we don't need it solely because of meta-clips; we also need such a facility the moment we allow relatively free wiring of input-output connections (which we do plan anyway). My proposal is to flag the respective MObjects as erroneous, which should be visualized accordingly in the GUI.
- We need thoroughly complete handling of multichannel video and audio throughout the whole application. We need to get rid of the distinction between "video" and "audio" tracks. ''But'' again, this is not due solely to meta-clips; we should do so anyway because of multichannel spatial audio, 3D video and other advanced media to come. Thus, when every media is multichannel by default, and the builder can sort and handle connections with multiple stream types (which it does -- or more correctly, was planned to do right from the start), putting in a meta-clip which pulls output from N channels with various mixed stream types from another EDL is not really a problem.
* The other "cons" listed above aren't actually directly connected to or caused by the existence of meta-clips, thus I wouldn't list them here.
- yes, it ''is'' true, concurrent changes to the session files may screw things up. But is this really an issue the Lumiera App should handle at all?
- yes, ''any corrupted part'' of the serialized session can mess up things. This is a real threat (see Cinelerra), but not limited to meta-clips. It is especially important, as you can expect users to work for months or years with a single session. Thus the integrity of the session is a value to be protected. That's the rationale why I put up the constraint in the proc layer that all important objects can only be created or re-created by some specialized factory, which in turn has the responsibility of never creating a corrupted object.
-- link:Ichthyostega[] [[DateTime(2008-07-27T22:15:01Z)]]
* I'll think about closures around serialized artefacts; the serialized stream can then be validated, and unsupported or corrupted parts can be tagged as erroneous (meaning they become virtually read-only, but they must be preserved) and circumvented. A lot of details have to be worked out here; IIRC ichthyo already planned support for 'erroneous' nodes in the data model. I'm also thinking about some debuggable plaintext dump format (maybe XML), so that really corrupt things can be fixed manually with some effort. After all, we handle gigabytes of video data, while our most valuable resource is the few-MB session file. I really aim to make that as robust as possible. Adding backups and redundancy there won't hurt.
-- link:ct[] [[DateTime(2008-07-30T16:03:04Z)]]
Conclusion
----------
This Design Entry concerns whether to include such a feature and discusses the general questions arising when doing so. As we will include meta-clips for sure, and do so exactly in the way described here, this proposal is 'final' now.
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2010-04-16_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Overview Engine Interface(s)
----------------------------
At the Engine Interfaces, Lumiera's Backend and Session get connected and work together to produce rendered output.
This design proposal intends to give an overview of the connection points and facilities involved,
to define some terms and concepts and to provide a foundation for discussion and working out the APIs in detail.
Participants
~~~~~~~~~~~~
*Render Process*:: represents an ongoing calculation as a whole
*Engine Model*:: encloses the details of the current engine configuration and wiring
*Dispatcher*:: translates a render process into the (planned) invocation of individual nodes
*Scheduler*:: cares for calculations actually to happen, in the right order and just in time, if at all
*Node*:: abstraction of a processing unit; supports planning by the dispatcher, +
allows to pull data, thereby driving the actual calculation.
Render Process
~~~~~~~~~~~~~~
The render process brackets an ongoing calculation as a whole. It is not to be confused with an operating system
process or thread; rather, it is a point of reference for the relevant entities in the GUI and Proc-Layer which need
to connect to such a "rendering", and it holds the specific definitions for this calculation series. A render process
_corresponds to a single data stream_ to be rendered. Thus, when the play controller of some timeline in the model is
in _playing_ or _paused_ state, typically multiple corresponding render processes exist.
* there is a display or output slot, which was allocated on creation of the process
* the process disposes calculated data frames "into" this slot
* the process can be paused/started and stopped (aborted, halted).
* some processes allow for changing parameters dynamically (e.g. speed, direction)
* each process has to ensure that the output/display slot finally gets closed or released
.Process parameters
A process is linked to a single stream data format (a -> link:StreamTypeSystem.html[stream implementation type]). +
It is configured with _frame quantisation_ and _timings_, and a _model port_ identifier and _channel selector_.
quantisation:: translates time values into frame numbers. (In the most general case this is a function, connected to the session)
timings:: a definition to translate global model time units in real clock time, including _alignment_ to an external time grid.
model port:: a point in the (high level) model where output can be produced. +
This might be a global pipe in one of the model's timelines, or it might be a _probe point_.
channel:: within the session and high level model, details of the stream implementation are abstracted. Typically,
a global pipe (master bus or subgroup) corresponds to a multichannel stream, and each of these channels
might be hooked up to an individual render process (we have to work out if that's _always the case_ or just
under _some circumstances_)
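For the simple case of a fixed framerate, the quantisation described above could be sketched as follows; the types and names are illustrative assumptions, not actual Lumiera API (in the general case, quantisation would instead be a function connected to the session):

```c
/* Sketch: fixed-framerate quantisation with alignment to a time grid.
 * Hypothetical types -- illustration only. */
#include <assert.h>
#include <stdint.h>

typedef struct {
    int64_t origin_us;   /* alignment: model time of frame #0, in µs */
    int64_t frame_us;    /* frame duration, e.g. 40000 µs for 25 fps */
} quantiser;

/* map a model time (µs) onto the number of the frame covering it */
static int64_t quantise (const quantiser* q, int64_t time_us)
{
    int64_t off = time_us - q->origin_us;
    /* floor division, so a time just before a grid point still
       lands on the preceding frame (also for negative times) */
    return (off >= 0) ? off / q->frame_us
                      : -((-off + q->frame_us - 1) / q->frame_us);
}
```

A variable-framerate quantiser would replace the division by an integration over the timing data held in the session.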
[NOTE]
===================
While the port and channel definition is certainly fixed, unfortunately the quantisation and the timings aren't.
The timings may be changed in the middle of an ongoing render process, due to changed playback speed, shuffling
or requirements forwarded from chase-and-lock synchronisation to an external source. We still need to discuss if
Lumiera is going to support variable framerates (several media professionals I've talked to were rather positive
we need to support that -- personally I'm still in doubt we do). Variable framerates force us to determine the frame numbers
by an integration over time from a start position up to the time position in question. The relevant data to be integrated
is located in the session / high-level model; probably we'll then create an excerpt of this data, but nonetheless
quantisation will be a function of time. Anyway, it is the render process's job to translate all kinds of parameter
changes into the relevant internal API calls to reconfigure the calculation process accordingly.
===================
Engine Model (low-level Model)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The low level model is a network of interconnected render nodes. It is created by the build process to embody any
configuration, setup and further parametrisation derived from the high-level description within the session.
But the data structure of this node network is _opaque_ and considered an implementation detail. It is not
intended to be inspected and processed by outward entities (contrast this to the high-level model within the
session, which provides an extensive discovery API and can be manipulated by model mutating commands). We
just provide a set of _query and information retrieval functions_ to suit the needs of the calculation process.
The engine model is _not persisted._
* the engine model is partitioned by a _segmentation_ of the time axis. Individual segments can be hot-swapped.
* the engine has _exit nodes,_ corresponding to the model ports mentioned above
* each exit node provides a stream type definition plus quantisation and alignment constraints.
Thus, for any pair (port, time) it is possible to figure out a segment and an exit node to serve this position.
The segmentation(s) for multiple ports might differ. To allow for effective dispatching, the model should provide
convenience functions to translate this information into frame number ranges. The mentioned quantisation and alignment
constraints stem from the fact that the underlying media source(s) are typically themselves quantised and the timings
might be manipulated within the processing chain. We might or might not be able to shift the underlying media source
(it might be a live input or it might be tied to a fixed timecode).
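The lookup sketched above -- finding the segment and exit node serving a given (port, time) pair -- might look like the following. All types are illustrative assumptions; the real engine model is opaque and would expose this only through its query functions:

```c
/* Sketch: per-port segmentation of the time axis, with binary search
 * for the segment covering a given time. Hypothetical types. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    int64_t start, end;   /* half-open interval [start, end) */
    void*   exit_node;    /* opaque handle to the serving exit node */
} segment;

typedef struct {
    segment* segs;        /* sorted, non-overlapping, contiguous */
    size_t   nsegs;
} segmentation;

/* find the segment covering `time`, or NULL if none does */
static segment* find_segment (segmentation* s, int64_t time)
{
    size_t lo = 0, hi = s->nsegs;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if      (time <  s->segs[mid].start) hi = mid;
        else if (time >= s->segs[mid].end)   lo = mid + 1;
        else return &s->segs[mid];
    }
    return NULL;
}
```

Hot-swapping a segment then only has to exchange one entry of this table atomically.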
Processing Node
~~~~~~~~~~~~~~~
In this context, a node is a conceptual entity: an elementary unit of processing. It might indeed be a single
invocation of a _processor_ (plugin or similar processing function), or it might be a chain of nodes or a complete
subtree; it might _represent_ a data source (file, external input, or a peer in case of distributed rendering), or
it might stand for a pipeline implemented in hardware. The actual decision between these possibilities happens
during the build process and can be configured by rules. Information about these decisions is retained only
insofar as it is required for the processing; most of the detailed type information is discarded after the
wiring and configuration step. As mentioned above, each node serves two distinct purposes, namely to
assist with the planning and dispatching, and to pull data by performing the calculations.
Nodes can be considered _stateless_ -- pulling a node has no effect outside the invocation context.
While a node _might_ actually be configured to drive a whole chain or subtree and propagate the pull request
_within_ this tree or chain internally, the node _never propagates a pull request beyond its realm._ The pull()
call expects to be provided with all prerequisite data, intermediary and output buffers.
Dispatching Step
~~~~~~~~~~~~~~~~
The dispatcher translates a render process into sequences of node invocations, which then can be analysed
further (including planning the invocation of prerequisites) and scheduled. This mapping is assisted by
the engine model API (to find the right exit node in the right segment), the render process (for quantisation)
and the involved node's invocation API (to find the prerequisites).
Node Invocation API
~~~~~~~~~~~~~~~~~~~
As nodes are stateless, they need to be embedded into an invocation context in order to be of any use. +
The node invocation has two distinct stages, and thus the invocation API can be partitioned into two groups.
Planning
^^^^^^^^
During the planning phase, the dispatcher retrieves the various pieces of information necessary to _schedule_ the following
pull call. This information includes
* reproducible invocation identifier, usable to label frames for caching
* opaque source identifier (owned by the backend) when this node represents a source
* prerequisite nodes
* index (channel) of the prerequisite's output to be fed as input buffer(s)
* number and size of the output buffers required
* additional memory required
* control data frame(s)
Node pull
^^^^^^^^^
* the pull call expects to be provided with all the resources announced during the planning step
* moreover, the pull call needs to know the time coordinates (or some way to figure them out)
* after retrieving automation, the control flow forwards to the actual processing function
* there is a result/error code (assuming the scheduler prefers error codes over exceptions)
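The two-stage invocation described above might be sketched as follows. All types, fields and the example node are hypothetical, intended only to illustrate the planning/pull split -- the real API still has to be worked out:

```c
/* Sketch: a stateless node announces its resource needs in a planning
 * step; the later pull() call is handed exactly those resources.
 * Hypothetical types -- illustration only. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t invocation_id;  /* reproducible, usable to label cached frames */
    size_t   n_inputs;       /* prerequisite output buffers to provide */
    size_t   n_outputs;      /* output buffers required */
    size_t   buffer_size;    /* size of each buffer in bytes */
    size_t   scratch_bytes;  /* additional working memory */
} plan_info;

typedef struct node node;
struct node {
    plan_info (*plan) (node*, int64_t frame_nr);
    int       (*pull) (node*, void** in, void** out, void* scratch,
                       int64_t frame_nr);        /* returns error code */
};

/* trivial example node: negates 4 float samples (stands in for a
 * real processing function) */
static plan_info ex_plan (node* n, int64_t f)
{
    (void)n;
    plan_info p = { (uint64_t)f ^ 0xABCDu, 1, 1, 4 * sizeof(float), 0 };
    return p;
}
static int ex_pull (node* n, void** in, void** out, void* scratch, int64_t f)
{
    (void)n; (void)scratch; (void)f;
    float* src = in[0];
    float* dst = out[0];
    for (int i = 0; i < 4; ++i) dst[i] = -src[i];
    return 0;   /* 0 == success */
}
```

The dispatcher would call `plan()` for each frame, allocate the announced buffers, schedule the prerequisites, and only then schedule the `pull()`.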
''''
Tasks
~~~~~
* find out if we need to support variable framerate
* find out about the exact handling of multichannel data streams
* design a buffer descriptor
* design a buffer designation scheme
* expand on the node identification scheme
* clarify how control data frames can be addressed
Discussion
~~~~~~~~~~
Pros/Cons/Alternatives
^^^^^^^^^^^^^^^^^^^^^^
Possible variants to consider....
Rationale
^^^^^^^^^
* allow for optimal resource use and avoid blocking of threads
* shift away complexity from the engine into the builder, which is by far not so performance critical
* allow to adjust the actual behaviour of the engine in a wide range, based on actual measurements
* create a code structure able to support the foreseeable extensions (hardware and distributed rendering)
without killing maintainability
Comments
--------
////////////////////
Conclusion
~~~~~~~~~~
* *accepted* / *dropped* by MMMMMM.YYYY developer meeting.
////////////////////
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-09-03_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Describe pluggable modules by a "Feature Bundle"
------------------------------------------------
This proposal builds upon Cehteh's Plugin Loader, which is the fundamental mechanism for integrating variable parts into the application.
It targets the special situation when several layers have to cooperate in order to provide some pluggable functionality. The most prominent example are the "effects plugins" visible to the user, because in order to provide such an effect
* the engine needs a processing function
* the builder needs description data
* the gui may need a custom control plugin
* and all together need a deployment descriptor detailing how they are related.
Description
~~~~~~~~~~~
The Application has a fixed number of *Extension Points*. Lumiera deliberately
does _not build upon a component architecture_ -- which means that plugins
cannot themselves create new extension points and mechanisms. New extension points
are created solely by the developers, by changing the code base. Each extension
point can be addressed by a fixed textual ID, e.g. "Effect", "Transition", ....
Now, to provide a pluggable extension for such an Extension Point, we use a *Feature Bundle*.
Such a Feature Bundle comprises
* a Deployment Descriptor (provided as "structured data" -- TODO: define the actual data format)
* the corresponding resources mentioned by this Deployment Descriptor
The Deployment Descriptor contains
* Metadata describing the Feature Bundle
- ID of the Extension point
- ID of the Bundle (textual ID)
- ID of origin / provider (could be a domain name)
- Category (textual, tree-like)
- Version number (major, minor)
- required Extension point version number (or Lumiera version no.?)
- Author name (utf8)
- Support email (utf8)
- textual description in a single line (utf8)
* A List of Resources, each with:
- ResourceID
- SubID
- Type of Resource, which may be
. Plugin
. Properties
. Script
. ...?
- one of:
. the Resource provided inline in suitable quoted form (for textual resources only)
. an URL or path or similar locator for accessing the Resource (TODO: define)
- Additional Metadata depending on Type of Resource (e.g. the language of a script)
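To illustrate, the descriptor fields listed above could be mirrored in a C structure like the following. Since the actual data format is still TODO in this proposal, every name and type here is purely an assumption:

```c
/* Sketch: in-memory mirror of a Deployment Descriptor, plus the kind
 * of lookup an Extension Point might perform. Hypothetical API. */
#include <assert.h>
#include <string.h>

typedef enum { RES_PLUGIN, RES_PROPERTIES, RES_SCRIPT } resource_type;

typedef struct {
    const char*   resource_id;   /* e.g. "ProcFunction" */
    const char*   sub_id;        /* e.g. platform variant */
    resource_type type;
    const char*   locator;       /* inline data, URL or path */
} resource;

typedef struct {
    const char* extension_point; /* e.g. "Effect" */
    const char* bundle_id;
    const char* provider;        /* e.g. a domain name */
    const char* category;        /* textual, tree-like */
    int version_major, version_minor;
    const char* author;
    const char* support_email;
    const char* description;     /* single line, utf8 */
    const resource* resources;
    int n_resources;
} deployment_descriptor;

/* how an Extension Point might look up a required resource by ID */
static const resource*
find_resource (const deployment_descriptor* d, const char* id)
{
    for (int i = 0; i < d->n_resources; ++i)
        if (!strcmp (d->resources[i].resource_id, id))
            return &d->resources[i];
    return 0;
}
```

The code driving an Extension Point would use such lookups to decide whether a bundle is fully usable, partially usable, or to be rejected.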
We do _not_ provide a meta-language for defining requirements of an Extension Point,
rather, each extension point has hard wired requirements for a Feature Bundle targeted
at this extension point. There is an API which allows code within lumiera to access
the data found in the Feature Bundle's Deployment Descriptor. Using this API, the code
operating and utilizing the Extension Point has to check if a given feature bundle is
usable.
It is assumed that these Feature Bundles are created / maintained by a third party,
which we call a *Packager*. This packager may use other resources from different
sources and assemble them as a Feature Bundle loadable by Lumiera. Of course, Lumiera
will come with some basic Feature Bundles (e.g. for colour correction, sound panning,....)
which are maintained by the core dev team. (please don't confuse the "packager" mentioned here
with the packager creating RPMs or DEBs or tarballs for installation in a specific distro).
Additionally, we may allow for the auto-generation of Feature Bundles for some simple cases,
if feasible (e.g. for LADSPA plugins).
The individual resources
^^^^^^^^^^^^^^^^^^^^^^^^
In most cases, the resources referred to by a Feature Bundle will be Lumiera Plugins, which means
there is an interface (with version number) which can be used by the code within Lumiera for
accessing the functionality. Besides, we allow for a number of further plugin architectures
which can be loaded by specialized loader code found in the core application. E.g. Lumiera
will probably provide a LADSPA host and a GStreamer host. Whether such an adapter is applicable
depends on the specific Extension Point.
The ResourceID is the identifier by which an Extension Point tries to find required resources.
For example, the Extension Point "Effect" will try to find a ResourceID called "ProcFunction".
There may be several entries for the same ResourceID, but with distinct SubIDs. This can be used
to provide several implementations for different platforms. It is up to the individual Extension
Point to impose additional semantic requirements on this SubID data field (which means: define
it as we go). Similarly, it is up to the code driving the individual Extension Point to define
when a Feature Bundle is fully usable, partially usable or to be rejected. For example, an
"Effect" Feature Bundle may be partially usable, even if we can't load any "ProcFunction" for
the current platform, but it will be unusable (rejected) if the proc layer can't access the
properties describing the media stream type this effect is supposed to handle.
Besides binary plugins, other types of resources include:
* a set of properties (key/value pairs)
* a script, which is executed by the core code using the Extension Point and which in turn
may access certain interfaces provided by the core for "doing things"
Probably there will be some discovery mechanism for finding (new) Feature Bundles similar
to what we are planning for the bare plugins. It would be a good idea to store the metadata
of Feature Bundles in the same manner as we plan to store the metadata of bare plugins in
a plugin registry.
Tasks
^^^^^
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Use or adapt one of the existing component systems or invent a new one.
Rationale
~~~~~~~~~
The purpose of this framework is to decouple the core application code from the details of
accessing external functionality, while providing a clean implementation with a basic set of
sanity checks. Moreover, it allows us to create a unique internal description for each
loaded module; this description data is, e.g., what gets stored as an "Asset" into the
user session.
Today it is well understood what is necessary to make a real component architecture work.
This design proposal deliberately avoids creating a component architecture and confines
itself to the bare minimum needed to avoid the common maintenance problems. As a guideline,
for each flexibility available to the user or packager, we should provide clearly specified
bounds which can be checked and enforced automatically. Because our main goal isn't to
create a new platform, framework or programming language, it is sufficient to allow the
user to _customize_ things, while structural and systematic changes can be done by the
lumiera developers only.
Comments
--------
From a fast reading, I like this, though some things might get refined. For example, I'd strongly suggest
making the Deployment Descriptor itself an interface which is offered by a plugin; all data would then
be queried through functions on this interface, not from some 'data format'. Also Resource IDs and a lot
of other metadata can be boiled down to interfaces: names, versions, uuids of these, instead of reinventing
another system for storing metadata. My idea is to make the link:Plugin/Interface[] system self-describing;
this will also be used to bootstrap a session on itself (by the serializer, which is tightly integrated)
-- link:ct[] [[DateTime(2008-09-04T09:28:37Z)]]
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Parked_
*Date* _2008-04-09_
*Proposed by* link:ct[]
-------------------------------------
Use Git Submodules to organize the project
------------------------------------------
We planned this a long time ago when the project started; this proposal is for working out the details and defining a turnover point in time.
Description
~~~~~~~~~~~
There is a git-filter-branch command which helps with the dirty work of isolating commits which touch certain directories. This can fairly easily be used to create a new repository with a rewritten history containing only sub-parts of the original history.
The basic idea is that a developer who wants to work on a certain subsystem clones the 'official' master and then updates and tracks only the development state of that subsystem.
Tasks
^^^^^
* what shall be in the master repository?
* boilerplate files, license, build infrastructure
* the _admin_ dir with supplemental scripts
* which submodules shall be defined?
* _doc/devel_
* _doc/user_
* _wiki_
* _uml_
* _src/backend_
* _src/proc_
* _src/gui_
* _src/lib_
Not yet decided:
* _tests_ move them into the _src/$subsystem_ as symlink?
* _src/tool_
Pros
^^^^
* better isolation of single subprojects
* someone interested in a single subproject can track the master while following only that subproject's updates
* smaller/faster updates/downloads
Cons
^^^^
* needs some more git-fu to be used by the developers
* we will host considerably more git repositories (a bigger list in gitweb); this is not a problem but might look more confusing
Alternatives
^^^^^^^^^^^^
Go on as we currently do, with one big repository per developer. The decision to use submodules is not urgent and the transition can be made at any time. The turnover should just be planned and scheduled for a single day to minimize confusion and merging issues.
Rationale
~~~~~~~~~
Once everyone is used to it, it allows a cleaner, saner workflow with well-isolated, less conflicting commits.
Comments
--------
We concluded that submodules are not yet needed, with the exception of the ./doc folder. Parked for now.
-- ct 2008-07-26 09:09:57
Back to link:Lumiera/DesignProcess[]

Design Process : Global Initialization
======================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-04-05_
*Proposed by* link:ct[]
-------------------------------------
Global initialization call
--------------------------
Propose a central initialization facility.
Description
~~~~~~~~~~~
Setup a `src/common/lumiera_init.c` file which contains following functions:
* `int lumiera_init(s.b.)` initializes the subsystems, global registries, link:NoBug[] and other things. Must be called once at startup.
* `void lumiera_destroy(void)` shuts down, frees resources, cleans up.
Calling `lumiera_init()` twice or more should be a fatal abort; calling `lumiera_destroy()` twice or more should be a no-op.
`lumiera_init()` returns `int` to indicate errors; it may take argc/argv for parsing options, this is to be decided.
`lumiera_destroy()` is suitable for being called in an `atexit()` handler.
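A minimal sketch of these semantics -- the signatures follow the proposal, while the internals are purely illustrative:

```c
/* Sketch: once-only init, idempotent destroy, as described above. */
#include <assert.h>
#include <stdlib.h>

static int lumiera_initialized = 0;

int lumiera_init (void)
{
    if (lumiera_initialized)
        abort ();       /* double init is a fatal programming error */
    /* ... init subsystems, global registries, NoBug ... */
    lumiera_initialized = 1;
    return 0;           /* 0 == success */
}

void lumiera_destroy (void)
{
    if (!lumiera_initialized)
        return;         /* repeated destroy is a harmless no-op */
    /* ... shut down, free resources, clean up ... */
    lumiera_initialized = 0;
}
```

The no-op guard in `lumiera_destroy()` is what makes it safe to register via `atexit()` in addition to an explicit shutdown call.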
Tasks
~~~~~
Implement this.
Pros
~~~~
Cons
~~~~
Alternatives
~~~~~~~~~~~~
Some things could be initialized with a `if (!initialized) {initialize(); initialized = 1;}` idiom. Parts of the C++ code could use different kinds of singleton implementations. Where needed this can still be done, but having a global initialization handler gives much better control over the initialization order and makes debugging easier since it is a serial, well defined sequence.
Rationale
~~~~~~~~~
We have some global things to initialize and prepare. It is just convenient and easy to do this from a central facility.
Comments
--------
* You may have noted that I implemented an Appconfig class (for some very elementary static configuration constants;
see `common/appconfig.hpp`). I chose to implement it as a Meyers Singleton, so it isn't dependent on global static initialisation, and I put the NOBUG_INIT call there too, so it gets issued automatically.
* Generally speaking, I can't stress enough that we should be reluctant with static init, so count me much in support of this design draft. While some resources can be pulled up on demand (and thus are candidates for one of the many singleton flavours), some things simply need to be set up once, and it's always better to do it explicitly and in a defined manner.
* For the proc layer, I plan to concentrate much of the setup and (re)configuration within the loading of a session, and I intend to make the session manager create an empty default session at a well defined point, probably after parsing the commandline in main (and either this or the loading of an existing session will bring the proc layer up into a fully operational state). -- link:Ichthyostega[] [[DateTime(2008-04-09T02:13:02Z)]]
* About link:NoBug[] initialization: I've seen that you made a nobugcfg where you centralized all nobug setup. The problem here is that a change in that file forces a whole application rebuild. I'd like to factor that out so that each subsystem and sub-subsystem does its own NOBUG_FLAG_INIT() initializations; the global NOBUG_INIT should be done in main() (of testsuites, tools, the app) and not even in the global initialization handler. Note that I use this global initialization in a nested way
------------------------------------------------------------
lumiera_init ()
{
lumiera_backend_init();
...
}
lumiera_backend_init()
{
...backend-global nobug flags ..
...backend subsystems _init() ..
...
}
------------------------------------------------------------
Backend tests then only call `lumiera_backend_init()` and don't need to do the whole initialization; the same could be done
for `lumiera_proc_init()` and `lumiera_gui_init()`. Note about the library: I think the lib shall not depend on such an init, but I would add one if really needed.
-- link:ct[] [[DateTime(2008-04-09T19:19:17Z)]]
* After reconsidering I think we have several different problems intermixed here.
- Regarding organisation of includes: I agree that my initial approach ties things too much together. This is also true for the global "lumiera.h" which proved to be of rather limited use. Probably we'll be better off if every layer has a separate set of basic or global definition headers. I think the usage pattern of the flags (btw. the idea of making a flag hierarchy is very good!) will be much different in the backend, the proc layer and the gui.
- Initialisation of the very basic services is tricky, as always. Seemingly this includes link:NoBug[]. Of course one wants to use assertions and some diagnostics logging already in constructor code, and, sadly enough, it can't be avoided completely that such code runs already in the static initialisation phase before entering main(). My current solution (putting NOBUG_INIT in the Appconfig ctor) is not airtight; I think we can't avoid going for something like a Schwarz counter here.
- Then there is the initialisation of common services. For these, it's just fine to do a dedicated call from main (e.g. to init the backend services, to create the basic empty session for proc and to fire off the event loop for the GUI). I think it's no problem to ''disallow'' any IO or any accessing of services in the other layers prior to this point.
- What about shutdown? Personally, I'd like to call an explicit shutdown hook at the end of main and to disallow any IO and usage of services outside each subsystem after this point. Currently, I have the policy for the proc layer to require every destructor to be called and everything to be deallocated, meaning that quite a lot of code is running after the end of main() -- most of which is library-generated.
-- link:Ichthyostega[] [[DateTime(2008-04-12T04:56:49Z)]]
* Regarding organisation of includes:... agreed
* Initialisation of the very...
- I won't hesitate to add some C++ functionality to give link:NoBug[] a singleton initialization in C++
* Then there is the initialisation of common services... agreed
* What is with shutdown?...
- Mostly agreed. I suggest making all initialization code once-only -- a second call shall bail out (under link:NoBug[]) -- while all shutdown code shall be callable multiple times, with subsequent calls being no-ops; this allows us to register at least some things in atexit() handlers, while we should add an explicit clean shutdown too. Whether those (or the atexit) handlers really get called is another thing: shutting down costs time, and in emergency cases we first and foremost only want to shut down things which fix some state for an easier next startup; clearing memory and process resources is only useful, and has to be done, when things run under a leak checker or as a library. -- link:ct[] [[DateTime(2008-04-12T08:49:55Z)]]
* (./) now done the following:
- Moved lumiera.h and nobugcfg.h to proc/lumiera.hpp and nobugcfg.hpp (i.e. consider them now as Proc-Layer only)
- Changed Appconfig to support simple lifecycle hooks, especially ON_BASIC_INIT. The rationale is that I don't want a lot of "magic" code in the Appconfig ctor; rather, each subsystem in need of a basic initialisation can install a small callback. It can do so for other lifecycle events too.
- Added the usual magic static ctor to install those callbacks in case they really need an early init. Thus now nobugcfg.hpp can install a callback to issue NOBUG_INIT, error.hpp does the same for the unknown-exception handler. I'll expect the config query system to need somthing similar soon.
- For all remaining initialisation (in case it can't be done on demand, which of course is allways preferable) now main() issues and explicit call `Appconfig::lifecycle (ON_GLOBAL_INIT)` and similar fire off ON_GLOBAL_SHUTDOWN at end. Similar for the tests. We could add an init-call for the backend there too, either directly or by registering an callback, just as it fits in better.
- This system is extensible: for example I plan to let the link:SessionManager[] issue ON_SESSION_INIT and ON_SESSION_CLOSE events. E.g. the AssetManager could then just install its callbacks to clean up the internal Asset registry.
-- link:Ichthyostega[] [[DateTime(2008-04-14T03:40:54Z)]]
* Regarding shutdown, my understanding is that ON_GLOBAL_SHUTDOWN does what is absolutely necessary (like flushing the session file, closing display and network connections, writing a backup or committing to a database). I see no problem with bypassing the standard dtor calls in a release build (they add no value besides diagnostics and may even cause a lot of pages to be swapped in). We could even make this a policy ("don't rely on destructors or automatic shutdown code to do any cleanup of permanent importance")
* I made this final now; details are still being worked out, but we basically agreed on it iirc.
-- link:ct[] [[DateTime(2008-07-26T09:08:11Z)]]
Back to link:Lumiera/DesignProcess[]

Design Process : How to proceed
===============================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-16_
*Proposed by* link:ct[]
-------------------------------------
How To Proceed
--------------
How we start...
Description
~~~~~~~~~~~
. Review this wiki, link:DesignProcess[] .. do we want this formalism?
. Setup git repos.
. Each developer makes a design sketch (bouml/wiki) about the subsystem he wants to take care of, that is:
.. ichthyo: render engine
.. cehteh: data backend
Please add yourself above, contact people already working on something when you want to join.
Tasks
~~~~~
Pros
~~~~
Cons
~~~~
Alternatives
~~~~~~~~~~~~
Rationale
~~~~~~~~~
Comments
--------
* "Do we want this formalism": this level of formalism seems right at the moment. It will work only if we agree on it and always do it this way. Every important (large scale) Issue, Question and Decision should be noted here, and we need to be sure that nothing gets lost.
* For the time being this formalism is enough. Later on, I fear, we will need a bit more (and some Tool support)
-- link:Ichthyostega[] [[DateTime(2007-06-17T00:24:14Z)]]
* Accepted, deployed, done ... Final
-- link:ct[] [[DateTime(2007-06-27T16:13:25Z)]]
Back to link:Lumiera/DesignProcess[]

Design Process : Interface Namespaces
=====================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-09-18_
*Proposed by* link:ct[]
-------------------------------------
Interface Namespaces
--------------------
Interfaces and their implementations (plugins) need unique identifiers. We describe here what schemes shall be used to ensure this.
Description
~~~~~~~~~~~
What are the goals?
* We need unique identifiers.
* We don't want anyone to have to register with us; this shall be a free system.
* There are 2 kinds, one bound to persons and one to projects as a whole.
* Uniqueness, not identity, is the goal; plugins could even be provided anonymously.
* This is the lowest level interface stuff, usually you'll deal with a high-level descriptor interface which provides much better (human readable) metainformation about a plugin.
* The names should follow C identifier rules and either be not too hard to decipher for a human or completely abstracted into a numeric ID like a GPG ID or UUID.
* Conclusion followed some mailing list and IRC discussion (see http://lists.lumiera.org/pipermail/lumiera/2008-September/000054.html[])
First part: unique prefix
~~~~~~~~~~~~~~~~~~~~~~~~~
Encoding of domain names and email addresses
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Domain names in lowercase, dots and special chars removed; the first char must be an alphabetic character (if it is numeric, just write it out in words):
------------------------------------------------------------
lumiera.org -> lumieraorg
Gmail.COM -> gmailcom
99foo-bar.baz.net -> ninetyninefoobarbaznet
------------------------------------------------------------
These are used when the provider is a project and not an individual person.
If the provider of an interface is an individual person, then the email address is encoded in a similar way.
The @ sign is encoded as uppercase "AT":
------------------------------------------------------------
7of9@star-trek.net -> sevenofnineATstartreknet
------------------------------------------------------------
Abstract identifiers
^^^^^^^^^^^^^^^^^^^^
As an alternative method, one can use GPG (or PGP) key IDs or full fingerprints.
These are encoded as uppercase 'PGP' or 'GPG' followed by a sequence of hex digits (both upper and lower case allowed):
------------------------------------------------------------
GPGAC4F4FF4
PGP09FF1387811ADFD4AE84310960DEA1B8AC4F4FF4
------------------------------------------------------------
Next, completely random identifiers (UUIDs) can be used by prefixing them with uppercase "UID" followed by alphanumeric characters (no underscore). No particular encoding is specified, but the result must conform to C identifier rules and shall give an entropy of 128 bits:
------------------------------------------------------------
UIDd557753400ad4ac6912773b1deb4d99d
------------------------------------------------------------
Remarks: these are now quite a lot of more or less unique encodings. Notably, we can allow them all, since they don't clash with each other. They would be parseable if needed, but we never ever need to parse them; they are just taken as a whole and have no other meaning than being unique.
Following parts: hierarchic namespace
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Lumiera itself will use a hierarchic naming scheme for its interface declarations and implementations.
The details will be laid out later; generally, things look like:
------------------------------------------------------------
lumieraorg_backend_frameprovider
lumieraorg_plugin_video
------------------------------------------------------------
It is suggested that anyone providing plugins for Lumiera follows this and extends it with his own identifier:
for example, if joecoder``@freevideo.org writes an ultrablur plugin, then its identifier would look like:
------------------------------------------------------------
joecoderATfreevideoorg_plugin_video_ultrablur
------------------------------------------------------------
Tasks
~~~~~
The above described scheme will be implemented and used by me (cehteh).
Rationale
~~~~~~~~~
I believe that writing plugins for Lumiera shall be simple. We do not want some central registry or management. Anyone
shall be able to just start writing plugins. But that puts some responsibility on the namespace, so that all plugins can
coexist and their names don't clash. The above describes a very simple and flexible naming system which anyone can
follow. It produces names which should be sufficiently unique for practical purposes. It leaves alternatives for
providing plugins as an institution, an individual, or even anonymously.
Conclusion
----------
Accepted by October.2008 developer meeting
Addendum Internal Interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Interfaces which are internal and not meant for public use have 2 underscores after the prefix (eg: `lumieraorg__`). These interfaces must not be used by third party plugins; they are subject to unannounced changes or removal and make no guarantee about backwards compatibility. When we spot someone using these interfaces we ''will break'' his plugin ''intentionally''!
-- link:ct[] [[DateTime(2008-10-24T03:43:43Z)]]
Back to link:Lumiera/DesignProcess[]

Design Process : Lumiera Design Process
=======================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-03_
*Proposed by* link:ct[]
-------------------------------------
Define a Lumiera design process
-------------------------------
Set up a lightweight, formalized process for how people can add proposals for the Lumiera development.
Description
~~~~~~~~~~~
Just use this Wiki to make it easy to add proposals in a well defined manner.
I'd like to introduce a slightly formalized process for the ongoing Lumiera planning:
* Every proposal is instantiated as 'Idea', the author gives other people the opportunity to review and comment on it with extreme prejudice, while still working out details.
* When the 'Idea' is in a proper form and worked out in most details, it becomes a 'Draft'. This 'Draft' needs to be carefully reviewed, commented, perhaps corrected and rated by the other Developers.
* At some point we may decide that a 'Draft' becomes 'Final' (I leave open for now how this decision shall be made). 'Final' documents will be imported into the repository.
* Sometimes proposals will be dropped for some reason; this is indicated by changing their state to 'Dropped'. They still stay in the system for further reference.
Tasks
~~~~~
* We need to refine link:Lumiera/DesignProcessTemplate[].
Pros
~~~~
* Simple
* Flexible
* Not many rules
* Persistent and at Final stage well documented process
Cons
~~~~
* Could be abused/vandalized (but wiki can use ACL's)
* Depends on my server, this might be unfavorable or unreliable, ymmv.
* Will only work if all or almost all involved people agree on this process
Alternatives
~~~~~~~~~~~~
* We could use some forum, Trac, Mailinglist or whatever instead.
* Just for design documentation I would give http://bouml.free.fr/[Bouml] a try. Personally, I am not very fond of UML design tools, but Bouml looks quite promising, and we could maintain the UML model in git repositories, which would be more favorable than this centralized wiki. The downside is that this needs even more agreement between the developers: everyone has to install and use Bouml (and learn its usage), and the design is constrained by an external tool.
Rationale
~~~~~~~~~
The wiki works: it is simple to use and just flexible enough to handle the task. I am not going to install any other software for such tasks on my server. As the design progresses, I'd propose to move our work into git repositories and eventually phase these wiki pages out anyway. I'd rather like to start out distributed/git right away .. but git gives us only a fine storage layer; for a design process we need some good presentation layer (later, when using git and starting the implementation, everyone's favorite editor serves for that). I have no better ideas yet to solve the presentation problem other than using this wiki (or maybe Bouml).
Comments
--------
Back to link:Lumiera/DesignProcess[]

Design Process: Lumiera Forward Iterator
========================================
[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2009-11-01_
*Proposed by* link:Ichthyostega[]
-------------------------------------
The situation addressed by this concept is when an API needs to expose a sequence of results, values or objects,
instead of just yielding a single function result value. As the naive solution of passing a pointer or array creates
coupling to internals, it was superseded by the GoF http://en.wikipedia.org/wiki/Iterator[Iterator pattern].
Iteration can be implemented by convention, polymorphically or by generic programming; we use the latter approach.
Lumiera Forward Iterator concept
--------------------------------
.Definition
An Iterator is a self-contained token value, representing the promise to pull a sequence of data
- rather than deriving from a specific interface, anything behaving appropriately _is a Lumiera Forward Iterator._
- the client finds a typedef at a suitable, nearby location. Objects of this type can be created, copied and compared.
- any Lumiera forward iterator can be in _exhausted_ (invalid) state, which can be checked by +bool+ conversion.
- especially, default constructed iterators are fixed to that state. Non-exhausted iterators may only be obtained by API call.
- the exhausted state is final and can't be reset, meaning that any iterator is a disposable one-way-off object.
- when an iterator is _not_ in the exhausted state, it may be _dereferenced_ (+*i+), yielding the ``current'' value
- moreover, iterators may be incremented (+++i+) until exhaustion.
Discussion
~~~~~~~~~~
The Lumiera Forward Iterator concept is a blend of the STL iterators and the iterator concepts found in Java, C#, Python and Ruby.
The chosen syntax should look familiar to C++ programmers and indeed is compatible with STL containers and ranges.
In contrast, while a STL iterator can be thought of as being just a disguised pointer, the semantics of Lumiera Forward Iterators is deliberately reduced to a single, one-way-off forward iteration. They can't be reset or manipulated by any arithmetic, and the result of assigning to a dereferenced iterator is unspecified, as is the meaning of post-increment and stored copies in general. You _should not think of an iterator as denoting a position_ -- just a one-way-off promise to yield data.
Another notable difference to the STL iterators is the default ctor and the +bool+ conversion.
The latter allows using iterators painlessly within +for+ and +while+ loops; a default constructed iterator is equivalent
to the STL container's +end()+ value -- indeed any _container-like_ object exposing Lumiera Forward Iteration is encouraged
to provide such an +end()+-function, additionally enabling iteration by +std::for_each+ (or Lumiera's even more convenient
+util::for_each()+).
Implementation notes
^^^^^^^^^^^^^^^^^^^^
*iter-adapter.hpp* provides some helper templates for building Lumiera Forward Iterators.
- _IterAdapter_ is the most flexible variant, intended for use by custom facilities.
An IterAdapter maintains an internal back-link to a facility exposing an iteration control API,
which is accessed through free functions as an extension point. This iteration control API is similar to C#'s,
allowing the client to advance to the next result and to check the current iteration state.
- _RangeIter_ wraps two existing iterators -- usually obtained from +begin()+ and +end()+ of an STL container
embedded within the implementation. This allows for iterator chaining.
- _PtrDerefIter_ works similarly, but can be used on an STL container holding _pointers,_
to be dereferenced automatically on access.
Similar to the STL habits, Lumiera Forward Iterators should expose typedefs for +pointer+, +reference+ and +value_type+.
Additionally, they may be used for resource management purposes by ``hiding'' a ref-counting facility, e.g. allowing to keep a snapshot or result set around until it can't be accessed anymore.
Tasks
^^^^^
The concept was implemented both for unit test and to be used on the _QueryResolver_ facility; thus it can be expected
to show up on the session interface, as the _PlacementIndex_ implements _QueryResolver_. QueryFocus also relies on that
interface for discovering session contents. Besides that, we need more implementation experience.
Some existing iterators or collection-style interfaces should be retro-fitted. See http://issues.lumiera.org/ticket/349[Ticket #349]. +
Moreover, we need to gain experience about mapping this concept down into a flat C-style API.
Alternatives
^^^^^^^^^^^^
. expose pointers or arrays
. inherit from an _Iterator_ ABC
. unfold the iteration control functions into the custom types
. define a selection of common container types to be allowed on APIs
. use _active iteration,_ i.e. pass a closure or callback
Rationale
~~~~~~~~~
APIs should be written so as not to tie them to the current implementation. Exposing iterators is known to create
a strong incentive in this direction and thus furthers the creation of clean APIs.
Especially in the Proc-Layer we already utilise several iterator implementations, but without a uniform concept these remain
just slightly disguised implementation types of a specific container. Moreover, the STL defines various and very elaborate
iterator concepts. Ichthyo considers most of these overkill and an outdated approach. Many modern programming languages build
with success on a very simple iterator concept, which just allows one to pull a sequence of values -- and nothing more.
Thus the idea is to formulate a concept in compliance with the STL's forward iterator -- but augmented by a stop-iteration test.
This would give us basic STL integration and look familiar to C++ and Java programmers, without compromising clean APIs.
Comments
--------
Back to link:Lumiera/DesignProcess[]

Design Process : Manifest
=========================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-09_
*Proposed by* link:ct[]
-------------------------------------
Manifest
--------
This proposal describes the general ideas of how the community will work together to create Lumiera.
Description
~~~~~~~~~~~
Note: I start with my personal opinions, this needs to be refined and worked out.
Please feel free to add new points or comment on things.
Background
~~~~~~~~~~
Cinelerra is quite an old project; there is an original version from heroinewarrior.com and a community fork at cinelerra.org. The original author claims that no-one produced usable input despite their proposals while Cinelerra was in development, and indeed the cinelerra.org community only feeds the source released by the original author back into their SVN repository and maintains a few fixes. There is not much development going on. Some people have created new functionality/features from time to time, which have rarely been merged into the main repository and are maintained by themselves.
The Cinelerra community is a quite loose group of individuals; there is some fluctuation in the developer base, and almost all developers have day jobs which restrict the time they can spend on the Cinelerra project.
Some of these things work quite well: there is an overall friendly relation between the involved people. People who know C++ and have the time to edit the source have satisfactorily added their own features. The mailing list and the IRC channel are also quite helpful, and even new users who ask stupid questions are welcome.
But there are some bad things too. Notably, there is not much progress in the community development. Users don't benefit from new improvements which other people have made. There is an endlessly growing list of bugs and feature requests, and when someone sends a patch to the ML he has to invest quite some time to maintain it until it might be merged. Finally, we don't know what Heroine Virtual is working on until we see the next tarball.
Solution for Lumiera
~~~~~~~~~~~~~~~~~~~~
We are in need of a new development model which is acceptable to all involved people and benefits from the way Cinelerra development worked in the years before, without repeating the bad sides:
. *Make it easy to contribute*
Even if it is favorable to have people continuously working on Lumiera, it's a fact that people show up, send a few patches and then disappear. The development model should be prepared for this by:
.. Good documentation
.. Well defined design and interfaces
.. Establish some coding guidelines to make it easy to maintain code written by others
.. Prefer known and simple approaches/coding over bleeding edge and highly complex techniques
. *Simple access*
We will use a fully distributed development model using git. I'll open an anonymously pushable repository which anyone can use to publish his changes.
. *Freedom to do, or not to do*
The model allows everyone to do as much as he wants. In a free project there is no way to put demands on people. This is good, since it makes joining easy and is open for anyone. The community might expect some responsibility from people maintaining their patches; but at worst, things which don't match our expected quality, and for which no one keeps them up, will be removed. Since we are working in a distributed way, with each developer maintaining his own repository and merging from other people, there is no easy way for bad code to leap into the project.
. *No Rule is better than a Rule which is not engaged*
We have to agree on some rules to make teamwork possible. These rules should be kept to a minimum required and accepted by all involved people. It is vital that we can trust each other on simple things, like properly formatted code or that patches one proposes to merge don't break the system etc..
. *Legal status must be clear*
Lumiera is developed under the GPL; every contributor must acknowledge this. Even when we provide anonymous commits, every non-trivial patch should be traceable to the person who made it. GPG signatures would be proper here - details need to be worked out.
. *All for Lumiera*
The goal is to make the best Linux video editor to date, nothing less. Everyone puts in their best abilities. This project is not the place to blame people for things where they are not profound, help each other, make things right instead of blaming someone. Everyone should rate himself at what he can do best on the project.
Comments
--------
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-10-10_
*Proposed by* link:Ichthyostega[]
-------------------------------------
the Marble Mode
---------------
''dual working styles -- build up from small pieces of clay or cut away the unneeded parts from a block of marble''
While the usual UI of video editors quite well supports a working style of assembling the result from small building blocks by relying on clips (media objects), the complementary style of cutting the result out of the raw footage is not supported equally well.
Description
~~~~~~~~~~~
This proposal stems from a discussion on the mailing list, starting with the Walter Murch quote "Marble and Clay".
+
It is thought to be in support and to complement nasa's [wiki:self:../DelectusShotEvaluator Delectus Shot Evaluator]
The central idea is to remove the difference between "viewing" (i.e. the media viewer) on the one hand and the timeline/Sequence on the other. Lumiera is designed to handle multiple Sequences, which can even arbitrarily be embedded in one another (the so called _meta-clips_). Basically, Sequences are a comparatively cheap resource, so the idea is to do the viewing based on a complete timeline, creating a new Sequence on-the-fly. It is then up to the user to promote one of these workbench-like timelines to become "the" master timeline.
To make this usable, the GUI should offer a slightly different representation which aims at reducing vertical screen usage. Also the track heads could be reduced, e.g. we don't need controls for mixing and panning; the effect stacks could be reduced to a simple mark indicating that there is any effect in a given time range; anything concerned with the fine points of wiring, tweaking effects and controlling automation could be left out deliberately. This would allow us to have several independent timelines above/below each other. There could be at least two, maybe even three or four "slots" which could be allocated by a timeline to display. Every time you open a new media, a new Sequence will be created on the fly and a new timeline display of this Sequence will be available, replacing the least recently used timeline display slot. Of course, re-visiting an already opened media will bring back the corresponding timeline in the state you left it, with markers, notes, maybe even trimmings and added clips. Contrast this GUI mode with the usual working mode (the "clay mode"), where there is _one_ central timeline, probably with tabs to switch between multiple independent Sequences (including the ones which actually are embedded in another timeline as meta-clips).
Basically, each of these timelines has a separate, independent transport, but transports can be locked together, and in locked state you can displace/offset the locked partners relative to one another. Moreover, there would be at least two viewer windows which would automatically be connected to receive the output of the timelines as new timelines are placed in the visible slots to work with. To round things up, we need good keybindings for navigation, and of course you can liberally mark parts and spill them over to another timeline, either overwriting or shifting existing footage there.
Technically, to support this working mode, _opening a media_ would:
* create a clip containing the whole media
* on-the-fly create new Sequence containing this clip
* allocate the next usable display slot and create a timeline display featuring this Sequence there
Initially this new Sequence would be anonymous. But the moment you do the first non-trivial modification there (like adding a label, trimming off parts, adding /deleting tracks), the new Sequence would be promoted to be a named and persisted entity, which from then on could itself serve as a new "pseudo-media". It would appear as an asset on its own (probably in a special sub category), and it could be used as a source to create clips from. This way, you could work with your media, prepare it, augment it even by adding effects like colour correction. And because it's a real Sequence, you could do non-trivial things there right in-place, like adding new sub-tracks, placing other media on them -- and then later on use this prepared media like a real media captured from camera source.
Finally, there should be the possibility to "play" a clip bin, thereby on-the-fly creating a new Sequence filled with all the clips in the order they were arranged in the bin. This would yield a bridge to the more clip-oriented working style and also provide a cheap implementation of the "sparse timeline" or "storyboard mode"
Tasks
^^^^^
* have several switchable _perspectives_ or working modes in the GUI
* associate a _workflow state_ with each Sequence, to track when a Sequence is just anonymous, becomes a named entity, is a clip-bin-tied Sequence, or finally is the master Sequence connected to the global output pipes section.
* work out the details of the "display slot allocation"
* provide an "opening media" compound function, comprised of
* creating the clip covering the whole media (./) (already implemented)
* creating a new Sequence and populating it with this clip
* make locked-together transports work
* in the GUI (transport controls)
* for coordinating the corresponding playback/render schedules (playback controller, which is located in the backend according to our current planning)
Rationale
^^^^^^^^^
Lumiera is not pioneering video editing by computers. We are sort of a second-generation (or even third-generation) computer based editing system. The tradition of conventional, film based editing clearly shows us these two quite different working approaches, which obviously can have quite some impact on the resulting style and rhythm of the final movie. The distinguishing property of the working style to be supported by the "marble mode" is that it bypasses the stage of creating and organizing clips, and rather directly evolves the footage into the final cut.
This working style is dual to the common clip based approach; neither of them is superior or inferior, thus we should actively support both working styles.
Comments
--------
Back to link:Lumiera/DesignProcess[]

Design Process : Shared Master Repository
=========================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-04-08_
*Proposed by* link:ct[]
-------------------------------------
Shared Master Repository setup
------------------------------
This describes how the shared MASTER is set up and synchronized.
Description
~~~~~~~~~~~
We have now a shared master repository in /git/LUMIERA. The public/anonymous git url is 'git://git.lumiera.org/LUMIERA', for people with ssh access it is pushable at 'git.lumiera.org:/git/LUMIERA'.
The repository is maintained by cehteh. It is updated daily by a script.
There are the following branches updated from their respective maintainer repositories:
[grid="all"]
`-------------`----------------------------------------------------`----------------------------
*BRANCHNAME* *DESCRIPTION* *Automatic updated from*
'master' Stable branch, use this as generic entry point. cehteh, ichthyo
'library' Support library development cehteh, ichthyo
'proc' Render core development ichthyo
'backend' Data backend development, cehteh
'gui' GUI development joel
------------------------------------------------------------------------------------------------
Automatic synchronization is only done for 'fast-forward' updates; conflicting changes are rejected. It is still possible to manually push to this repository to override the automatic synchronization.
Please suggest changes to this setup when required (new branches, different maintainers, ...)
Comments
--------
Instead of this @daily polling update, maintainers might use git hooks on their repos to push relevant things; be careful not to push cruft or tags (which tags shall be present here is not yet resolved -> no tags for now)
-- link:ct[] [[DateTime(2008-04-08T21:48:51Z)]]
Back to link:Lumiera/DesignProcess[]

Design Process : Mistakes to avoid
==================================
[grid="all"]
`------------`-----------------------
*State* _Dropped_
*Date* _2008-04-21_
*Proposed by* link:rick_777[]
-------------------------------------
Mistakes to avoid in the Lumiera design
---------------------------------------
As a multimedia user and experienced programmer, I've found various flaws present in open source Non Linear Video editors. Here I will list the problems and their proposed (or mandatory) solutions. Please forgive me if some of the ideas here have already been approved, I wrote this text before reaching this wiki.
Description
~~~~~~~~~~~
As a multimedia user and experienced programmer, I've found the following flaws present in open source Non Linear Video editors (your mileage may vary) :
. Frequent crashes (which most of the time make you lose your work)
. Reinventing the wheel for every new project
. Lack of a user-friendly (and extensible) UI
. Lack of support for certain video formats or codecs
. Lack of documentation
. Lack of cross-platform support
. Dependency on scripted languages like Python, which make installation a mess
I will expand on the problems and their proposed (or mandatory) solutions.
1. Frequent crashes
~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Frequent Crashes and unsaved work.
*Severity* CRITICAL.
*Solution* Isolating the UI from the rendering and data handling (also improves the extensibility)
*Required* Yes
*Workarounds* Auto-save (however it's not a real solution for the problem)
--------------------------------------------------------------------
Working with multimedia (video / audio) editing is a magnet for segfaults (crashes) due to the handling of pointers and compression algorithms. A bug in a plugin (like in Audacity's low-pass filter) will crash the application, and you suddenly realize you lost your work - unless you have an auto-save feature, but that doesn't go to the root of the problem.
My proposal is to move the low-level handling of video to a separate process, which then does the processing - if it crashes, the UI will only report an error with a dialog (i.e. "the process crashed. Try again?"), but your work will stay safe. I'm not sure about the implementation difficulties that arise from having a shared memory buffer for rendering / processing, but one thing is certain: whenever you move the cursor or rewind a part of a clip in your resources, the application isn't supposed to crash. Just moving the cursor isn't a time-critical task, so perhaps we can use temporary files for this. It's safer if you're not doing the final rendering.
Comments
^^^^^^^^
I am not sure yet about separating things into processes. Generally it is clear that this would be more robust, but there are some performance impacts and programming problems (massive amounts of data in shared memory). Most importantly, when a subprocess gets a job and crashes on it, it won't complete the job; we have no choice except to abort it gracefully. From a user perspective ("It doesn't work!") there is not much difference to a complete crash. And yes, we rather aim to make it crash proof: crashes are bugs and have to be fixed, period.
Lumiera will never ever lose work; we don't plan to do it the project-file/auto-save way. Lumiera will keep projects in an internal database-like format which consists of a dump file and a contiguously written log file. After a crash, power-down or whatever, this log just gets replayed. The advantages are countless: imagine persistent, selective undo and so on. Any other format (Cinelerra2 XML, MXF, ...) will be realized by importer/exporter plugins.
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
2. Reinventing the wheel for every new project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Various projects compete and reinvent the wheel
*Severity* Serious (Slows down development time. A lot)
*Solution* Multi-tier design, turn the data handling into a backend and use whatever UI you prefer
*Required* Yes. Better now, while the project hasn't started yet
---------------------------------------------------------------------
Imagine the Linux kernel was tied to the window manager. You would have to stick with KDE or GNOME and you couldn't improve it! Fortunately it's not like that for Linux, but it is for some projects. If you want to change the user interface from Qt to wxWidgets or GTK, you'll need to rewrite every module.
If you separate the UI from the project handling engine, you can simply fork the project and change the UI to one that supports skinning, without having to do the complicated video-processing stuff.
Separating the processes has an equivalent in web programming; it's called "separation of concerns", or multi-tier design. When you suddenly change the database engine, you don't need to change the whole program, just the database module. The same goes for changing the UI from HTML to XML or Flash, provided they're separate modules that only communicate through a clearly defined API.
Example case 1: The Code::Blocks IDE. The compiling engine supports various compilers, and the engine itself is only a plugin for the main editor. If the compiler crashes, you only get an error, but the IDE doesn't crash (unless it's the UI part that's doing something wrong).
Example case 2: Chessmaster. The user interface and speech synthesis stuff only call the chess engine, called "theking.exe". Linux chess games also depend on an engine to do the thinking.
So I suggest splitting the project into four separate tiers (not necessarily processes):
. User interface - communicates with the "project" tier, handles the user events and does the calls.
. The project tier - the main part of the video editor. This one invokes the renderer and decides which effects to apply, saving them as mere parameters for later processing. It also tells you where the current pointer for the track view is, and calls the rendering engine for the current frame or for previews of a certain special effect. Note that if this tier keeps running even when the GUI crashes, we can later restart the GUI and keep working.
. The rendering engine - this one must be a separate process for the reasons stated in problem #1. This also gives us the advantage that it can work in the background while we keep working on the project (after all, the project is just a set of data stating which effects to apply to which tracks, and which files are used for the tracks) - instead of just showing a window saying "Rendering, please wait". Even Adobe Premiere Pro suffered from this problem, which means that if we put in enough effort, we can surpass commercial software in certain areas. Note that the rendering engine uses the same API as the project tier, as it works on a copy of the project when doing the final rendering.
. The video processing wrapper, which has interfaces for different video processing toolkits (DirectX, GStreamer, etc.). This also makes the project cross-platform. Tiers 1 and 2 can go in one process, and tiers 3 and 4 in another (this would make tier 2 a library which defines a C++ class, and tier 4 another library used by the rendering engine).
By separating the tiers, these can later become their own projects and overall the community would receive great benefits.
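A "clearly defined API" between tiers could, for instance, take the shape of the versioned C struct convention proposed elsewhere in this design process (a struct of function pointers whose first member is a size). The following is a sketch only; the interface and function names are invented.

```c
/* Sketch of a versioned C interface between two tiers, following the
 * "struct of function pointers, first member is a size" convention from
 * the plugin interface proposal. Function names are invented. */
#include <stddef.h>

typedef struct RenderInterface {
    size_t size;                    /* filled in by the implementation */
    int  (*render_frame)(int frame_no);
    void (*cancel)(void);
    /* new functions are only ever appended, so older callers keep working */
} RenderInterface;

/* caller-side check: does this implementation provide a given member? */
#define HAS_MEMBER(iface, member) \
    ((iface)->size >= offsetof(RenderInterface, member) + sizeof((iface)->member))

/* a dummy implementation, as a plugin or tier would provide it */
int  dummy_render(int frame_no) { return frame_no; }
void dummy_cancel(void)         { }
RenderInterface impl = { sizeof(RenderInterface), dummy_render, dummy_cancel };
```

Because functions are only appended and the size field grows with them, an older caller and a newer implementation stay compatible.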
Comments
^^^^^^^^
Please look at our design drafts; things will be separated (a little differently than you describe here). We reuse things which are beneficial (gavl, ffmpeg, ..) but we are also aware that we reinvent the wheel for some things intentionally. Lumiera's goal is not just to glue some existing libraries together under a new GUI; there are already a lot of projects trying that route. We rather aim for a ''professional'' high performance video editing solution which does some things in a different (maybe more complex) way. We do not use existing frameworks like MLT or gstreamer because we believe that these do not fit our goals (gstreamer will be supported through plugins). We do not produce yet another multimedia framework library to be used by others (this would only happen by coincidence).
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
3. Lack of a user-friendly and extensible UI.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Often, editors provide a very poor and buggy interface. Examples: Jahshaka doesn't even provide tooltips for the various tools, and the documentation is poor; In Cinelerra I've noticed some bugs when using the open dialog, I'd rather have the KDE one, thanks.
*Severity* From Annoying to Serious.
*Solution 1* Use a library that allows you to use different widget libraries, like wxWidgets.
*Required* Recommended, but not obligatory.
*Solution 2* Write different user interfaces, but they'd be hard to maintain.
*Required* No.
---------------------------------------------------------------------
This problem is complicated, we need a good framework for handling the tracks. Perhaps this could become a separate project. Ideas are welcome.
Comments
^^^^^^^^
Joel started working on a GUI recently and is making good progress. The UI should finally be quite flexible, as it mostly provides a skeleton which plugins render into. We have quite a lot of ideas about the UI and user input is welcome. The UI is currently the most separate tier in the design; I'd like to make it a plugin itself which is loaded when Lumiera is started in a GUI mode, but it is too early to say how exactly it will be integrated, except that we all agree that the GUI is optional and Lumiera can also run headless, script driven.
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
4. Lack of support for certain video formats or codecs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Lack of support for certain video formats or codecs.
*Severity* Critical.
*Workarounds* 1. Give a help page for the user to do his own conversion, but this is very inelegant, annoying, and a waste of time. 2. Provide conversion on the fly, and keep a separate "preprocessed" copy of the imported clip in a separate directory. This is a nice middle ground, IMHO.
*Solution* Use a wrapper library as stated in problem # 2, having a plugin-based design is recommended.
*Required* Yes.
---------------------------------------------------------------------
Some editors like Cinelerra are hardwired to using one format, or have a phobia of certain formats/codecs (i.e. DivX AVIs). If we separate the project editing engine from the video handling libraries, we can use unknown formats by simply providing an input/output plugin. This would allow us to use files encoded with lossless codecs like http://lags.leetcode.net/codec.html[Lagarith]. It also provides forward compatibility with future formats.
Comments
^^^^^^^^
Lumiera is a video editor; we don't care (*cough*, not really true) about video formats. Everything which comes in and goes out is handled by plugins which implement video formats. We currently decided to use 'gavl' because it is a nice small library which does exactly what we want. Later on, gstreamer and other such decoder/encoder/processing-pipe libs will be supported.
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
5. Lack of documentation
~~~~~~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Some video editors have very poor documentation (and that's an understatement *cough* Jahshaka *cough* )
*Severity* Critical.
*Solution* Have a team for the documentation.
*Required* Yes.
---------------------------------------------------------------------
Nuff said.
Comments
^^^^^^^^
Quote from Ohloh.net: link:http://www.ohloh.net/projects/lumiera[]
------------------------------------------------------------
Extremely well-commented source code
Lumiera is written mostly in C++.
Across all C++ projects on Ohloh, 23% of all source code lines are comments. For Lumiera, this figure is 46%.
This very impressive number of comments puts Lumiera among the best 10% of all C++ projects on Ohloh.
------------------------------------------------------------
Nuff said... Oh well, regarding user docs we'd like to get such impressive ratings there too. Any helpers?
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
6. Lack of cross-platform support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Where's my Windows version?
*Severity* Blocker
*Solution* Use a cross-platform toolkit for the UI.
*Required* Depends, do you plan to make it Cross-Platform?
--------------------------------------------------------------------
A good example for this is the Code::Blocks IDE, which was designed to be cross-platform from the beginning. Curiously, at first the project was Windows-only, and its only F/OSS alternative there was Dev-C++ from Bloodshed (eew). Otherwise you'd have to stick with proprietary applications like Visual C++.
On Linux there were various IDEs, but they were Linux-only. Since Code::Blocks uses a cross-platform toolkit (wxWidgets), it can be compiled on both Windows and Linux. There are RPMs for various distros now that the first public version (8.02) is out. I've heard that Qt is also cross-platform, but I haven't tried it yet.
Of course, if you separate the UI from the project engine, someone could make his own Windows UI for the project. What needs to be taken care of then is that the rendering libraries are cross-platform too.
Comments
^^^^^^^^
We refuse to make it cross-platform intentionally. Most things are written portably, POSIX compatible; some might need platform specific fixes. But our target is primarily Linux (because that's what we use), secondarily any other free OS (hopefully we find some testers/maintainers for that). Lumiera ''might'' run on OSX and patches will be accepted, but it is not a free platform so we don't care about it ourselves. Windows, due to its different system interfaces, will be hard to port; if someone wants to do that, have fun, we will accept patches too, but we do not support it in *any* way ourselves.
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
7. Dependency on scripted languages like Python, which make installation a mess
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[grid="all"]
`------------`------------------------------------------------------
*Problem* Installation can be a mess if we depend on scripted languages.
*Severity* Annoying; the end user might just settle for another project that "just works".
*Solution* Write it in C++ or another easily compiled language.
*Required* VERY recommended.
---------------------------------------------------------------------
I've had to install several packages for my distro (whose repository is not as large as others like Ubuntu's) from source. Some of them depend on very esoteric scripting languages which I also need to install, and then the libraries, etc. My suggestion is to free the end user from this burden and work in a common language, like C++.
Comments
^^^^^^^^
At some point a scripting language ''will'' be required, e.g. to drive the test suite, make headless rendering work and so on. We need to provide installation instructions and/or even bundle this language with Lumiera. This will likely be a small embeddable language like Lua or some kind of Forth (or maybe some Scheme?); it should not depend on strange modules which are not part of the core scripting language distribution (or we shall provide them too). This needs to be worked out.
-- link:ct[] [[DateTime(2008-04-21T11:27:23Z)]]
Author's comments
^^^^^^^^^^^^^^^^^
Some of the measures stated in this document are optional, but separating the processes for the rendering engine, editor and user interface is the optimal solution and required to avoid common problems.
Discussion
----------
Mostly we agree with the general statements in this Design Entry. But there are some points which don't stand the test of a detailed technical discussion. For example, you simply can't state that it's a 'mistake' not to write code which runs similarly on Windows and *nix. Well, you could try to write it in Java. See my point? While today it's quite feasible to write office stuff or banking applications in a cross-platform manner, a video editor is still a different kind of beast.
A similar argument holds true for the question whether or not to use separate processes and IPC. While it certainly is a good idea to have the X server or a database running in a separate process, the situation is really quite different for editing video. Hopefully it's clear why.
Could you please rework this Design Entry in a way that we can finalize (accept) it?
* Please remove the section about windows
* Please separate out things which need technical discussion and are not just "mistakes", thus retaining only the big-picture statements (on which we all agree)
* How to secure the application against crashes
* Whether it is viable/desirable to run the GUI in a separate process really needs in-depth technical discussion (create a new Design Entry for this)
* How to deal with the dependencies problem in combination with plugins/extensions and script languages
-- link:Ichthyostega[] [[DateTime(2008-10-05T01:51:50Z)]]
Conclusion
----------
The October.2008 dev meeting decided to 'drop' this design proposal as is.
Basically, this text just tells us "to make Lumiera good", and it contains a mixture of topics:
* We fully agree to 80% of the statements made there, but we think those statements are so very basic and self-evident as to be considered off-topic here. We are aware of the recurring problems with open source video editing. That's why we are here.
* The proposal draws conclusions on two technically substantial points with which we don't agree, and it fails to provide sufficient (technically sound) arguments to prove these statements.
While it is certainly 'desirable' to be cross-platform as much as possible and especially '''target Microsoft Windows''', we don't see many possibilities with today's mainstream technology to build an application as technologically demanding as a video editor this way. We would end up developing two or even three sister applications, or be forced to sacrifice performance for portability. When facing such options, we have a clear preference to concentrate on a really free and open platform.
While it is certainly 'desirable' to make the application as robust as possible, we don't see how '''using multiple separate processes''' could help with this goal ''without creating major scalability or performance problems'' due to the use of shared memory. And, more importantly: we don't share the basic assumption made in the proposal, namely that video processing is inherently dangerous. We think the basic algorithms involved are sufficiently well-known and understandable to implement them in a sound manner.
''''
Back to link:Lumiera/DesignProcess[]

Design Process : IRC Developer Meeting
======================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-15_
*Proposed by* link:ct[]
-------------------------------------
IRC Developer Meeting
---------------------
Meet regularly on IRC to discuss short-term issues.
Description
~~~~~~~~~~~
Set up a schedule for a regular developer meeting on IRC. The idea is that we have some time where we can quickly resolve issues which require group communication.
Things like:
* Who works on what, what's going on
* Who needs help with something
* Make decisions about pending proposals
We should make a small 'to be talked about' list beforehand (on the wiki) and afterwards write a similarly small protocol about the decisions made.
I'd suggest doing this on the first Friday of each month at 21:00 GMT; that's late evening in Europe and afternoon in the USA. Details need to be acknowledged.
Tasks
~~~~~
Pros
~~~~
Cons
~~~~
* Inflexible - People's lives and schedules can easily change within a few months thus making them unavailable for further meetings
Alternatives
~~~~~~~~~~~~
* We should decide to meet every 4th week. The exact day and time shall be decided 1 week prior to the meeting week to give members time to prepare and resolve scheduling conflicts.
Rationale
~~~~~~~~~
Comments
--------
* A short meeting every week is a must. "Meeting" means a fair amount of the developers involved show up and there is a possibility to point at current problems. We could additionally make a full meeting in the way proposed by link:MichaelPloujnikov[]. But even one week can be dangerously long in software development, esp. when the dev process is so open and distributed.
* Personally, I am OK with Friday 21GMT
-- link:Ichthyostega[] [[DateTime(2007-06-17T00:29:54Z)]]
* Friday 21GMT is only good for me starting in two weeks until September. After that it's booked for the Winter. Thursday 21GMT would be better.
-- link:MichaelPloujnikov[] [[DateTime(2007-06-17T03:09:36Z)]]
* Also Ok for me
-- link:ct[] [[DateTime(2007-06-17T17:20:37Z)]]
* I think this (monthly) meeting should not be too frequent and should address the more global project directions; a more frequent meeting rather makes it unattractive and time consuming for only slightly involved people. This does not mean that we can't meet more often, but that should be acknowledged between the people who work together on some features/subsystems. That would also have the benefit that such small groups are very flexible to meet, even daily in hot phases, and able to agree on times when all members can attend. And later, if timing is not that good, only one person of such a subproject/group may attend the monthly meeting and report/collect information there. It's likely hard enough to get many developers to meet at one time in a month (when we have more devs than now :P)
-- link:ct[] [[DateTime(2007-06-17T17:20:37Z)]]
Back to link:Lumiera/DesignProcess[]

NoBug logging flag hierarchy
============================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-04-05_
*Proposed by* link:ct[]
-------------------------------------
link:NoBug[] logging flag hierarchy
-----------------------------------
link:NoBug[] allows hierarchical organization of logging flags. This proposes documentation/planning for the setup.
Description
~~~~~~~~~~~
Take a look at my draft at:
link:http://www.lumiera.org/gitweb?p=lumiera/ct;a=blob;f=doc/devel/nobug_flags.txt;h=74471e255e6ebfedb642e450bdfd3f79e346c600;hb=backend[NoBug_flags]
I've added the things I'm planning for the backend; others might add their own plans there too. So far this is an early draft, comments welcome.
Tasks
~~~~~
* Needs a file.c defining the common root, see link:Lumiera/DesignProcess/GlobalInitialization[]
* Everyone needs to set up this hierarchy with NOBUG_DEFINE_FLAG_PARENT (flag, parent_flag);
Pros
~~~~
When done right, logging control is much easier, just 'NOBUG_LOG=lumiera:DEBUG' would suffice.
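For illustration only (the flag names below are invented; the macros are the ones named in the Tasks above), such a hierarchy could look like:

```c
/* hypothetical flag hierarchy, rooted in one 'lumiera' flag */
NOBUG_DEFINE_FLAG (lumiera);                      /* the common root */
NOBUG_DEFINE_FLAG_PARENT (backend, lumiera);
NOBUG_DEFINE_FLAG_PARENT (file_io, backend);
```

'NOBUG_LOG=lumiera:DEBUG' at the root then covers every child flag at once.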
Rationale
~~~~~~~~~
We need some easy way to control logging; later on, when Lumiera runs in beta tests, it must be easy to tell a tester how to create usable debugging logs.
Comments
--------
cehteh will take care of further integration
-- link:ct[] [[DateTime(2008-07-26T09:11:29Z)]]
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2009-01-14_
*Proposed by* link:ct[]
-------------------------------------
Normalized Device Coordinates
-----------------------------
AkhIL pointed me to a Blender problem and how RenderMan fixes it. We should use this too.
Description
~~~~~~~~~~~
Just a snippet from an IRC log:
------------------------------------------------------------
[15:09] <AkhIL> and I hope lumiera will use some resolution independend measuring for all parameters
[15:09] <cehteh> one can rotate where the node actually sits
[15:09] <AkhIL> like NDC
[15:09] <cehteh> or pass transistions through the renderpipe, make all effects transisition aware and apply them at the end
[15:10] <cehteh> the later is better but needs more efforts and some rethinking
[15:10] <cehteh> we will prolly support both in lumiera :)
[15:11] <AkhIL> in renderman's NDC for horizontal image with 4:3 aspect ration (-1.33,-1) is lower-left corner and (1.33,1) upper-right
[15:11] <cehteh> ah
[15:11] <AkhIL> so moving to different resolutions and different aspect ratios in renderman makes no problems
[15:11] <cehteh> well good point, we will measure in pixel but need to convert between them . using a float would be good to address pixels
[15:12] <cehteh> yes
[15:12] <cehteh> what stands NDC for?
[15:13] <AkhIL> Normalized Device Coordinates
[15:14] <cehteh> ok
[15:14] <AkhIL> so from -1 to 1 is a range by smallest image size
[15:15] <cehteh> yes sounds reasonable
[15:15] * cehteh adds a note to the lumiera design docs
[15:15] <cehteh> so far we dont do anything where it matters .. but that will come
[15:16] <AkhIL> when you move some logo to (0.8,-0.8) it will stay on screen even when you chenge resolution and image aspect ratio
[15:17] <AkhIL> all input images should be scaled to this range (-1,1) by smalles side
------------------------------------------------------------
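The mapping AkhIL describes can be written down as a small conversion helper (a sketch; the type and function names are invented): the shorter image side spans [-1, 1], so a 4:3 frame covers x in [-1.33, 1.33] and y in [-1, 1].

```c
/* Sketch of the NDC convention from the log: the shorter image side
 * spans [-1,1], so a 4:3 frame covers x in [-1.33,1.33], y in [-1,1].
 * Type and function names are invented. */
typedef struct { float x, y; } Vec2;

/* convert normalized device coordinates to pixel coordinates */
Vec2 ndc_to_pixel(Vec2 ndc, int width, int height)
{
    float half = (width < height ? width : height) / 2.0f;
    Vec2 px;
    px.x = width  / 2.0f + ndc.x * half;
    px.y = height / 2.0f - ndc.y * half;   /* pixel y grows downward */
    return px;
}
```

A logo placed at (0.8, -0.8) then lands near the lower right corner for any resolution or aspect ratio.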
Tasks
^^^^^
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Rationale
~~~~~~~~~
Comments
--------
One issue where I always assumed we'd need to define something of this sort is proxy editing. Especially this is a problem in conjunction with masks. Basically, this means a bit more "vector graphics". With film/video editing this was rather unusual, but with the advent of more and newer digital video/film formats it gets more and more important. Also, our considerations regarding time handling and quantisation to single frames somewhat fit into this line of thought. Up to now, the standard way of thinking was rather to use a "project framerate" and a fixed resolution in pixels. But we certainly can do better.
-- Ichthyostega 18:09:50
Back to link:Lumiera/DesignProcess[]

Design Process : Official Assembly Language
===========================================
[grid="all"]
`------------`-----------------------
*State* _Dropped_
*Date* _2008-08-01_
*Proposed by* link:PercivalTiglao[]
-------------------------------------
Official Assembly Language
--------------------------
I describe here an optimization that might have to be taken into account at the design level. At the very least, we should design our code with auto-vectorization in mind. At the most, we can choose to manually write parts of our code in assembly language and vectorize them by hand using x86 SSE instructions or PowerPC AltiVec instructions. By keeping these instructions in mind, we can achieve a large increase in speed.
Description
~~~~~~~~~~~
While the C/C++ core should be designed efficiently and as portably as possible, nominating an official assembly language or an official platform can create new routes for optimization. For example, the x86 SSE instruction set can add/subtract 16 bytes in parallel (interpreted as 8-bit, 16-bit, 32-bit, or 64-bit integers, or 32-bit/64-bit floats), with some instructions supporting masks, blending, dot products, and other operations specifically designed for media processing. While the specific assembly-level optimizations should be ignored for now, structuring our code in a way that encourages a style of programming suitable for SSE optimization would make Lumiera significantly faster in the long run. At the very least, we should structure our innermost loops so that they are suitable for gcc's auto-vectorization.
The problem is that we will be splitting up our code. Bugs may appear on some platforms where assembly-specific code is used, or perhaps the C/C++ code would have bugs that the assembly code does not. We would be maintaining one more codebase for the same functionality. Remember though, we don't have to write assembly now; we just leave enough room in the design to add assembly-level libraries somewhere in our code later.
Tasks
~~~~~
* Choose an "Official" assembly language / platform.
* Review the SIMD instructions available for that assembly language.
* For example, the Pentium 2 supports MMX instructions. Pentium 3 supports MMX and SSE instructions. Early Pentium4s support MMX, SSE, and SSE2 instructions. Core Duo supports up to SSE4 instructions. AMD announced SSE5 instructions to come in 2009.
* Consider SIMD instructions while designing the Render Nodes and Effects architecture.
* Write the whole application in C/C++ / Lua while leaving sections to optimize in assembly later. (Probably simple tasks or a library written in C)
* Rewrite these sections in Assembly using only instructions we agreed upon.
Pros
~~~~
Assuming we go all the way with an official assembly language / platform...
* Significantly faster render and previews. (Even when using a high-level library like http://www.pixelglow.com/macstl/valarray/[macstl valarray], we can get 3.6x -- 16.2x the speed in our inner loop. We can probably expect greater if we hand-optimize the assembly)
Cons
~~~~
* Earlier architectures of that family will be significantly slower or unsupported
* Other architectures will rely on C / C++ port instead of optimized assembly
* Redundant Code
Alternatives
~~~~~~~~~~~~
* We only consider auto-vectorization -- GCC is attempting to convert trivial loops into common SSE patterns. Newer or Higher level instructions may not be supported by GCC. This is turned on http://gcc.gnu.org/projects/tree-ssa/vectorization.html[in GCC4.3 with specific compiler flags]
* We can consider assembly but not officially support it -- we leave the holes there for people to patch up later. Unofficial ports may come up, and maybe a few years down the line we can reconsider assembly and start to implement it.
* Find a SIMD library for C/C++ -- Intel's ICC and http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gcc/Vector-Extensions.html[GCC] both have non-standard extensions to C that roughly translate to these instructions. There is also the http://www.pixelglow.com/macstl/valarray/[macstl valarray library] mentioned earlier. Depending on the library, the extensions can be platform specific.
* Write in a language suitable for auto-vectorization -- maybe there exist some vector-based languages? Fortran might be one, but I don't really know.
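For the auto-vectorization alternative, a loop shaped the way gcc's vectorizer likes (a simple counted loop, unit stride, non-aliasing pointers) might look like this; the function itself is an invented illustration, not Lumiera code:

```c
/* A loop shaped for gcc's auto-vectorizer: a simple counted loop with
 * unit stride, and 'restrict' so the compiler can prove the arrays
 * don't overlap. The crossfade function is an invented example. */
void mix_frames(float *restrict out,
                const float *restrict a,
                const float *restrict b,
                float alpha, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = alpha * a[i] + (1.0f - alpha) * b[i];   /* crossfade */
}
```

With the vectorizer enabled, gcc can turn such a loop into SSE (or AltiVec) operations without any hand-written assembly.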
Rationale
~~~~~~~~~
I think this is one of those few cases where the design can evolve in a way that makes this kind of optimization impossible. As long as we keep this optimization available for the future, we should be good.
Comments
--------
* I have to admit that I don't know too much about SSE instructions, aside from the fact that they can operate on 128 bits at once in parallel and there are some cache tricks involved when using them (you can move data in from memory without bringing in the whole cache line). Nonetheless, keeping these assembly-level instructions in mind will ease optimization of this video editor. Some of the instructions are high-level enough that they may affect design decisions. Considering them now, while we are still in the early stages of development, might prove advantageous. Optimize early? Definitely not. However, if we don't consider this means of optimization, we may design ourselves into a situation where this kind of optimization becomes impossible.
* I don't think we should change any major design decisions to allow for vectorization. At most, we design a utility library that can be easily optimized using SIMD instructions. Render Nodes and Effects can use this library. When this library is optimized, then all Render Nodes and Effects can be optimized as well. -- link:PercivalTiglao[] [[DateTime(2008-08-01T16:12:11Z)]]
* Uhm, the Lumiera core (backend, proc, gui) doesn't do any number crunching. This is all delegated to plugins (libgavl, effects, encoders). I think we don't need any highly assembler/vector optimized code in the core (well, let's see). These plugins and libraries are somewhat out of our scope and that's good; the people working on them know better than we do how to optimize this stuff. It might even be worthwhile to test whether, if we leave all vectorization out of the core, the plugins can use the vector registers better and we gain overall performance!
-- link:ct[] [[DateTime(2008-08-03T02:27:14Z)]]
* Another idea about a probably worthwhile optimization: gcc can instrument code for profiling, do arc profiling, and then build it a second time with feedback from what it learned in the profile runs. This mostly affects branch prediction and can give a reasonable performance boost. If someone likes challenges, prepare the build system to do this:
. build it with -fprofile-arcs
. profile it by running ''carefully'' selected benchmarks and tests.
. rebuild it again this time with -fbranch-probabilities
. PROFIT
-- link:ct[] [[DateTime(2008-08-03T02:27:14Z)]]
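The steps above might translate into build commands roughly like the following; the file and benchmark names are placeholders, not part of any actual build system:

```sh
# Hypothetical sketch of the two-pass build described above.
gcc -O2 -fprofile-arcs -o lumiera src/*.c          # 1. instrumented build
./lumiera --benchmark carefully-selected.tests     # 2. profile runs write .gcda data
gcc -O2 -fbranch-probabilities -o lumiera src/*.c  # 3. rebuild with profile feedback
```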
* I've discussed general ideas around, and I agree now that "core Lumiera" is not the place to think of these kinds of optimizations. So I'll just move this over to dropped. -- link:PercivalTiglao[] [[DateTime(2008-08-04T18:33:58Z)]]
''''
Back to link:Lumiera/DesignProcess[]

Design Process: Builder within the Proc-Layer
=============================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-03-06_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Builder within the Proc-Layer
-----------------------------
One of the core ideas of the Proc-Layer (as being implemented since summer '07 by Ichthyo) is the use of the Builder-pattern to achieve a separation between high-level view and low-level view.
Description
~~~~~~~~~~~
The Proc-Layer differentiates into a high-level model, which captures the properties of the problem domain (manipulating media objects), and a low-level model, which is a network of render nodes and is optimized for processing efficiency.
In between sits the Builder, which is triggered on all important/relevant changes to the high-level model.
The Builder inspects the current state of this high-level model and, driven by the actual objects and their configuration, creates a corresponding representation within the low-level model, which is then hot-swapped into the renderer.
In the course of this building process, all necessary decisions are taken, disabled features and impossible connections are detected and left out, and all more elaborate or macro-like structures (e.g. meta clips) are broken down into simple building blocks, which can be implemented 1:1 by render nodes in the low-level model.
The configuration of the high-level model is deliberately very open; the builder doesn't impose many limitations, rather it reflects the found configuration down into the low-level model using generic rules.
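The builder's role could be sketched roughly like this (all class names here are illustrative assumptions, not the actual Proc-Layer API):

```cpp
#include <string>
#include <vector>

struct RenderNode {                 // low-level model: one processing step
    std::string operation;
};

struct Clip {                       // high-level model object
    std::string media;
    bool enabled = true;
};

struct Builder {
    // Inspect the high-level objects and emit only what can actually run:
    // disabled features are detected here and simply left out.
    std::vector<RenderNode> build(const std::vector<Clip>& session) {
        std::vector<RenderNode> nodes;
        for (const Clip& c : session) {
            if (!c.enabled) continue;        // disabled/impossible parts are skipped
            nodes.push_back({"decode:" + c.media});
        }
        return nodes;                        // to be hot-swapped into the renderer
    }
};
```

The point of the sketch: the high-level objects are never rendered directly; the builder derives a separate, simplified node list from them.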
Pros
^^^^
* Separation, decoupling
* Architectural approach instead of just hacking away...
Cons
^^^^
* Increases the overall complexity
* More work to be done to get a minimal system implemented
Rationale
~~~~~~~~~
This design was chosen as a direct consequence of the problems encountered in the Cinelerra-2 codebase.
* Separating this way allows us to take on different viewpoints on what is "good" and "efficient".
* In the low-level view simplicity and efficiency of computation is the main criterion.
* Whereas in the high-level view a good modeling of the problem domain and maximum flexibility is preferable.
* The high-level view is taken out of the critical code path, allowing for advanced and even experimental technologies without endangering the whole application's usability. In the low-level realm, 'speed' is measured in ms, whereas in the high-level domain, speed is rather measured in 100ms.
* The separation creates distinct interfaces and allows people with very different skill sets to work in parallel at the various levels of the App.
Conclusion
----------
This proposal reflects a distinct approach taken right from the start.
Marked 'final' at the October 2008 developer meeting
Back to link:Lumiera/DesignProcess[]
@ -0,0 +1,102 @@
[grid="all"]
`------------`-----------------------
*State* _Draft_
*Date* _2008-08-16_
*Proposed by* link:Ichthyostega[]
-------------------------------------
High-level model in the Proc-Layer
----------------------------------
The purpose of this link:DesignProcess[] entry is to collect information regarding the design and structure of the high-level model of Lumiera's Proc-Layer. Most of the information presented here is already written down somewhere, in the http://www.lumiera.org/wiki/renderengine.html#SessionOverview[Proc-Layer TiddlyWiki] and in source code comments. This summer, we had quite a few discussions regarding meta-clips, a *container* concept and the arrangement of tracks, which further helped to shape the model as presented here.
While the low-level model holds the data used for carrying out the actual media data processing (=rendering), the high-level model is what the user works upon when performing edit operations through the GUI (or script driven in "headless"). Its building blocks and combination rules determine largely what structures can be created within the http://www.lumiera.org/wiki/renderengine.html#Session[Session].
On the whole, it is a collection of http://www.lumiera.org/wiki/renderengine.html#MObjects[media objects] stuck together and arranged by http://www.lumiera.org/wiki/renderengine.html#Placement[placements].
Basically, the structure of the high-level model is a very open and flexible one &mdash; every valid connection of the underlying object types is allowed &mdash; but the transformation into a low-level node network for rendering follows certain patterns and only takes into account objects reachable while processing the session data in accordance with these patterns. Taking into account the parameters and the structure of the objects visited when building, the low-level render node network is configured in detail.
The fundamental metaphor or structural pattern is to create processing *pipes*, which are a linear chain of data processing modules, starting from a source port and providing an exit point. http://www.lumiera.org/wiki/renderengine.html#Pipe[Pipes] are a _concept or pattern,_ they don't exist as objects. Each pipe has an input side and an output side and is in itself something like a bus treating a single http://www.lumiera.org/wiki/renderengine.html#StreamType[media stream] (but this stream may still have an internal structure, e.g. several channels related to a spatial audio system). Other processing entities like effects and transitions can be placed (attached) at the pipe, resulting in them being appended to form this chain. Optionally, there may be a *wiring plug*, requesting the exit point to be connected to another pipe. When omitted, the wiring will be figured out automatically.
Thus, when making a connection _to_ a pipe, output data will be sent to the *source port* (input side) of the pipe, whereas when making a connection _from_ a pipe, data from its exit point will be routed to the destination. Incidentally, the low-level model and the render engine employ _pull-based processing,_ but this is of no real relevance for the high-level model.
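As a rough illustration of the pipe pattern (Pipe is only a concept in the model; the class and names below are assumptions made for this sketch):

```cpp
#include <functional>
#include <utility>
#include <vector>

using Frame  = std::vector<double>;          // stand-in for one media frame
using Effect = std::function<Frame(Frame)>;  // one data processing module

struct Pipe {
    std::vector<Effect> chain;               // strictly linear chain of processors

    void attach(Effect e) { chain.push_back(std::move(e)); }

    // Data entering at the source port passes every effect in order and
    // leaves at the single exit point.
    Frame pull(Frame input) const {
        for (const Effect& e : chain)
            input = e(std::move(input));
        return input;
    }
};
```

Attaching an effect appends it to the chain; there is deliberately no branching inside the pipe itself.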
image:images/high-level1.png[]
Normally, pipes are limited to a _strictly linear chain_ of data processors ("*effects*") working on a single data stream type, and consequently there is a single *exit point* which may be wired to a destination. As an exception to this rule, you may insert wire tap nodes (probe points), which may explicitly send data to an arbitrary input port; they are never wired automatically. It is possible to create cyclic connections by such arbitrary wiring; these will be detected by the builder and flagged as an error.
While pipes have a rather rigid and limited structure, it is allowed to make several connections to and from any pipe &mdash; even connections requiring a stream type conversion. It is not even necessary to specify ''any'' output destination; in that case the wiring will be figured out automatically by searching the context and finally applying some general rule. Connecting multiple outputs to the input of another pipe automatically creates a *mixing step* (which optionally can be controlled by a fader). Several pipes may be joined together by a *transition*, which in the general case simultaneously treats N media streams. Of course, the most common case is to combine two streams into one output, thereby also mixing them. Most available transition plugins belong to this category, but, as said, the model isn't limited to this simple case; moreover, it is possible to attach several overlapping transitions covering the same time interval.
Individual media objects are attached, located or joined together by *Placements*. A http://www.lumiera.org/wiki/renderengine.html#Placement[Placement] is a handle for a single MObject (implemented as a refcounting smart-ptr) and contains a list of placement specifications, called http://www.lumiera.org/wiki/renderengine.html[LocatingPin]. Adding a placement to the session acts as if creating an _instance_ (it behaves like a clone in case of multiple placements of the same object). Besides absolute and relative placement, there is also the possibility for a placement to stick directly to another MObject's placement, e.g. for attaching an effect to a clip or for connecting an automation data set to an effect. This _stick-to placement_ creates a sort of loose clustering of objects: it derives its position from the placement it is attached to. Note that while the length and the in/out points are a _property of the MObject_, its actual location depends on how it is _placed_ and thus can be maintained quite dynamically. Note further that effects can have a length of their own; thus, through these attachment mechanics, the wiring and configuration within the high-level model can be quite time-dependent.
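A minimal sketch of the placement idea, with purely illustrative names (the real Placement/LocatingPin classes are far richer):

```cpp
#include <memory>
#include <vector>

struct MObject { double length = 0; };       // length is a property of the object

struct LocatingPin {                         // one placement specification
    enum Kind { Absolute, Relative, StickTo } kind;
    double offset;                           // interpretation depends on 'kind'
};

struct Placement {
    std::shared_ptr<MObject> object;         // refcounting handle: multiple
                                             // placements behave like clones
    std::vector<LocatingPin> pins;

    // Derive the actual position; here naively the sum of all pin offsets.
    // The real resolution involves context search and rules.
    double resolveTime() const {
        double t = 0;
        for (const LocatingPin& p : pins) t += p.offset;
        return t;
    }
};
```

The key property mirrored here: the position is not stored on the MObject but derived from the placement, so it can change dynamically.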
image:images/high-level2.png[]
Actually a *clip* is handled as if it were comprised of local pipe(s). In the example shown here, a two-channel clip has three effects attached, plus a wiring plug. Each of those attachments is used only if applicable to the media stream type the respective pipe will process. As the clip has two channels (e.g. video and audio), it will have two *source ports* pulling from the underlying media. Thus, as shown in the drawing to the right, by chaining up every attached effect applicable to the respective stream type defined by the source port, each channel (sub)clip effectively gets its own specifically adapted processing pipe.
Example of a complete Session
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
image:images/high-level3.png[]
The Session contains several independent http://www.lumiera.org/wiki/renderengine.html#EDL[EDL]s plus an output bus section ( *global Pipes* ). Each EDL holds a collection of MObjects placed within a *tree of tracks* .
Within Lumiera, tracks are a rather passive means of organizing media objects and aren't involved in the data processing themselves. The possibility of nesting tracks allows for easy grouping. Like the other objects, tracks are connected together by placements: a track holds the list of placements of its child tracks. Each EDL holds a single placement pointing to the root track.
As placements have the ability to cooperate and derive any missing placement specifications, this creates a hierarchical structure throughout the session, where parts on any level behave similarly where applicable. For example, when a track is anchored to some external entity (label, sync point in sound, etc.), all objects placed relative to this track will adjust and follow automatically. This relation between the track tree and the individual objects is especially important for the wiring, which, if not defined locally within an MObject's placement, is derived by searching up the track tree and utilizing any wiring plug locating pins found there, if applicable. In the default configuration, the placement of an EDL's root track contains a wiring plug for video and another wiring plug for audio. This setup is sufficient for getting every object within this EDL wired up automatically to the correct global output pipe. Moreover, by adding another wiring plug to some sub-track, we can intercept and reroute the connections of all objects creating output of this specific stream type within this track and all its child tracks.
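The wiring derivation by searching up the track tree could be sketched like this (hypothetical names, not Lumiera's API):

```cpp
#include <string>

struct Track {
    const Track* parent = nullptr;
    std::string wiringPlug;                  // empty => nothing defined locally
};

// If the object's track defines no plug, walk toward the root track; the
// first plug found intercepts/reroutes everything placed below it.
std::string resolveOutput(const Track* t) {
    for (; t; t = t->parent)
        if (!t->wiringPlug.empty()) return t->wiringPlug;
    return "default-bus";                    // final fallback: a general rule
}
```

Adding a plug on a sub-track thus automatically reroutes all objects on that track and its children, without touching them individually.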
Besides routing to a global pipe, wiring plugs can also connect to the source port of a *meta-clip*. In this example session, the outputs of EDL-2, as defined by locating pins in its root track's placement, are directed to the source ports of a http://www.lumiera.org/wiki/renderengine.html#VirtualClip[meta-clip] placed within EDL-1. Thus, within EDL-1, the contents of EDL-2 appear like a pseudo-media, from which the (meta-)clip has been taken. They can be adorned with effects and processed further, completely analogous to a real clip.
Finally, this example shows an *automation* data set controlling some parameter of an effect contained in one of the global pipes. From the effect's POV, the automation is simply a http://www.lumiera.org/wiki/renderengine.html[ParamProvider], i.e. a function yielding a scalar value over time. The automation data set may be implemented as a bézier curve, by a mathematical function (e.g. sine or fractal pseudo-random) or by some captured and interpolated data values. Interestingly, in this example the automation data set has been placed relative to the meta-clip (albeit on another track); thus it will follow and adjust when the latter is moved.
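A minimal sketch of the ParamProvider idea, assuming only what the text states (a function of time yielding a scalar value); the factory name is invented for illustration:

```cpp
#include <cmath>
#include <functional>

using ParamProvider = std::function<double(double /*time*/)>;

// One possible automation source: a sine oscillation around a base value.
ParamProvider makeSineAutomation(double base, double amp, double freq) {
    constexpr double kPi = 3.141592653589793;
    return [=](double t) { return base + amp * std::sin(2 * kPi * freq * t); };
}
```

The consuming effect only ever sees the function, so bézier curves, captured data or pseudo-random sources are interchangeable behind the same interface.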
Tasks
^^^^^
Largely, the objects exist in code, but lots of details are missing
* Effect-MObjects are currently just a stub
* the actual data storage holding the placements within the EDL has to be implemented
* how to implement the reference to a param provider still has to be worked out
* similarly, there is a memory management issue regarding relative placement
* the locating pins are just stubs and the query logic within placement needs to be implemented
Pros
^^^^
* very open and modular, allows for creating quite dynamic object behaviour in the session
* implementation is rather simple, because it relies on a small number of generic building blocks
Cons
^^^^
* very tightly coupled to the general design of the Proc-Layer. Doesn't make much sense without a Builder and a Rules-based configuration
* not all semantic constraints are enforced structurally. Rather, it is assumed that the builder will follow certain patterns and ignore non-conforming parts
* allows creating patterns which go beyond the abilities of current GUI technology. Thus the interface to the GUI layer needs extra care and won't be as simple as it could be with a more conventional approach. Also, the GUI needs to be prepared for objects moving in response to some edit operation.
Alternatives
^^^^^^^^^^^^
_Use the conventional approach:_ hard-wire a _reasonably simple_ structure and code the behaviour of tracks, clips, effects and automation explicitly, providing separate code to deal with each of them. Use the hard-wired assumption that a clip consists of "video and audio". Hack in any advanced features (e.g. support for multi-camera takes) as GUI macros. Just don't try to support things like 3D video and spatial audio (anything beyond stereo and 5.1). Instead, add a global "node editor" to make the geeks happy, allowing everything to be wired to everything, just static and global of course.
Rationale
~~~~~~~~~
Ichthyo developed this design because the goal was to start out with the level of flexibility we know from Cinelerra, but to do so considering all consequences right from the start. Besides, the observation is that the development of non-mainstream media types like stereoscopic (3D) film and really convincing spatial audio (beyond the ubiquitous "panned mono" sound) is hindered not by technological limitations, but by a pragmatism preferring the "simple" hard-wired approach.
Comments
--------
Back to link:Lumiera/DesignProcess[]
@ -0,0 +1,139 @@
[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-03-06_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Placement Metaphor used within the high-level view of Proc-Layer
----------------------------------------------------------------
Besides the [wiki:self:../ProcBuilder Builder], one of the core ideas of the Proc-Layer (as currently being implemented by Ichthyo) is to utilize ''Placement'' as a single central metaphor for object association, location and configuration within the high-level model. The intention is to prefer ''rules'' over fixed ''values.'' Instead of "having" a property for this and that, we query for information when it is needed.
The proposed use of '''Placement''' within the proc layer spans several, closely related ideas:
* use the placement as a universal means to stick the "media objects" together and put them on some location in the timeline, with the consequence of a unified and simplified processing.
* recognize that various ''location-like'' degrees of freedom actually form a single ''"configuration space"'' with multiple (more than 3) dimensions.
* distinguish between ''properties'' of an object and qualities, which are caused by "placing" or "locating" the object in ''configuration space''
- ''properties'' belong to the object, like the blur value, the media source file, the sampling/frame rate of a source
- ''location qualities'' exist only because the object is "at" a given location in the graph or space, most notably the start time, the output connection, the layering order, the stereoscopic window depth, the sound pan position, the MIDI instrument
* introduce a ''way of placement'' independent of properties and location qualities, describing whether the placement ''itself'' is ''absolute, relative or even derived''
* open especially the possibility to ''derive'' parts of the placement from the context by searching over connected objects and then up the track tree; this includes the possibility of having rules for resolving unspecified qualities.
Description
~~~~~~~~~~~
The basic idea is to locate ''Media Objects'' of various kinds within a ''configuration space''. Any object can have a lot of different qualities, which may partially depend on one another and may partially be chosen freely. All these various choices are considered ''degrees of freedom'' -- and defining a property can be seen as ''placing'' the object at a specific parameter value on one of these dimensions. While this view may be bewildering at first sight, the important observation is that in many cases we don't want to lock down any of those parameters completely to one fixed value. Rather, we just want to ''limit'' some parameters.
To give an example, most editing applications let the user place a video clip at a fixed time and track. They do so by just assigning fixed values, where the track number determines the output and the layering order. While this may seem simple, sound and pragmatic, it actually puts down way too much information in a much too rigid manner for many common use cases of media editing. More often than not it isn't necessary to "nail down" a video clip; rather, the user wants it to start immediately after the end of another clip, be sent to some generic output, and stay above some other clip in the layering order. But, as the editing system fails to provide the means for expressing such relationships, we are forced to
work with hard values, resort to a bunch of macro features, or even compensate for this lack by investing additional resources in production organisation (the latter is especially true when building up a movie sound track).
On the contrary, using the '''Placement''' metaphor has the implication of switching to a query-driven approach.
* it gives us one single instrument to express the various kinds of relations
* the ''kind of placement'' becomes an internal value of the ''placement'' (as opposed to the object)
* some kinds of placement can express rule-like relations in a natural fashion
* while there remains only one single mechanism for treating a bunch of features in a unified manner
* plugins could provide exotic and advanced kinds of placement, without the need of massively reworking the core.
When interpreting the high-level model and creating the low-level model, Placements need to be ''resolved'',
resulting in a simplified and completely nailed-down copy of the session contents, which this design calls »the '''Fixture'''«
Media Objects can be placed
* fixed at a given time
* relative to some reference point given by another object (clip, label, timeline origin)
* as plugged into a specific output pipe (destination port)
* as attached directly to another media object
* to a fixed layer number
* layered above or below another reference object
* fixed to a given pan position in virtual sound space
* panned relative to the pan position of another object
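The resolution step into the Fixture could be sketched as follows, under the simplifying assumption that only "fixed at a given time" and "relative to another object" placements occur (all names hypothetical):

```cpp
#include <map>
#include <string>
#include <vector>

struct Placed {
    std::string id;
    std::string after;     // relative: start right after this object; "" => absolute
    double start, length;
};

// Fixture: every rule-like placement evaluated once, yielding a completely
// nailed-down mapping of object id -> absolute start time.
std::map<std::string, double> resolveFixture(const std::vector<Placed>& session) {
    std::map<std::string, double> fixture;
    std::map<std::string, double> ends;      // end time of each resolved object
    for (const Placed& p : session) {        // assumes referenced objects come first
        double t = p.after.empty() ? p.start : ends[p.after];
        fixture[p.id] = t;
        ends[p.id]    = t + p.length;
    }
    return fixture;
}
```

Moving "clipA" in such a model automatically moves everything placed relative to it, which is exactly the behaviour the Placement metaphor is after.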
Tasks
^^^^^
* currently just the simple standard case is drafted in code.
* the mechanism for treating placements within the builder is drafted in code, but needs to be worked out to see the implications more clearly
* while this design opens endless possibilities, it is not clear how much of it should be visible through the GUI
Pros
^^^^
* with just one concept, we get a lot of issues right which many conventional approaches fail to solve satisfactorily
* one grippy metaphor instead of several special treatments
* includes the simple standard case
* unified treatment
* modular and extensible
* allows much more elaborate handling of media objects than the conventional approach, while both the simple standard case and the elaborate special case are "first class citizens", completely integrated in all object treatment.
Cons
^^^^
* difficult to grasp, breaks with some habits
* requires a separate resolution step
* requires ''querying'' for object properties instead of just looking up a fixed value
* forces the GUI to invent means for handling object placement which may go beyond the conventional
* can create quite some surprises for the user, especially if he doesn't care to understand the concept up front
Alternatives
^^^^^^^^^^^^
Use the conventional approach
* media objects are assigned with fixed time positions
* they are stored directly within a grid (or tree) of tracks
* layering and pan are hard wired additional properties
* implement an additional auto-link macro facility to attach sound to video
* implement a magnetic snap-to for attaching clips seamless after each other
* implement a splicing/sliding/shuffling mode in the gui
* provide an output wiring tool in the GUI
* provide macro features for this and that....
. (hopefully I made clear by now ''why'' I don't want to take the conventional approach)
Rationale
~~~~~~~~~
There is a conventional way of dealing with those properties, which stems from the use of analogue hardware, especially multitrack tape machines. This conventional approach constantly creates practical problems which could be avoided by using the placement concept. This is due to the fact that the placement concept follows the natural relations of the involved concepts, while the conventional approach was dictated by technological limitations.
* the usual layering based on tracks constantly forces the user to place clips in an unnatural and unrelated fashion and to tear apart clips which belong closely together
* the conventional approach of having a fixed "pan control" in specialized "audio tracks" constantly hinders the development of more natural and convincing sound mixing. It favors a single sound system (intensity-based stereophony) for no good reason.
* handling of stereoscopic (3D) video/film is notoriously difficult within the conventional, hard-wired approach
* building more elaborate soundscapes and sound design is notoriously difficult to maintain, because the user is forced to use hidden "side chains" and magic rules, and to re-build details in external applications, due to the lack of flexible integration of control data alongside the main data.
The high-level model is close to the problem domain; it should provide means to express the (naturally complex) relationships between media objects. Using an abstract and unified concept is always better than having a bunch of seemingly unrelated features, even if those may be easier to grasp for beginners. Moreover, the Placement concept deliberately brings in a rule-based approach, which fits well into the problem domain. Finally, there is sort of a visionary aspect involved here: Ichthyo thinks that nowadays, after image and sound are no longer bound to physical media, there is potential for new workflows to be discovered, and the Placement concept could be an extension point for such undertakings.
Comments
--------
Placement Metaphor
~~~~~~~~~~~~~~~~~~
Re:
"Finally, there is sort-of a visionary aspect involved here:
Ichthyo thinks that nowadays, after image and sound are no longer bound to physical media, there is potential for '''new workflows''' to be '''discovered''',
and the '''Placement concept''' '''''could be''''' an '''extension point''' for such undertakings."
New workflows will not just be '''discovered''', but they will be able to be '''recorded, analysed, templated, automated, and integrated''' into the full workflow process.
This will free up a greater proportion of time for the "finishing" processes of projects.
"The Placement concept 'could be' an extension for such undertakings" is very likely an understatement, as it is this which '''''will be''''' what makes these undertakings possible, because it enables the gathering and use of, and decision rules based on, these parameters.
This feature/capability is likely to stamp the Lumiera project as a flagship benchmark in more ways than one, for some time.
. --link:Tree[][[DateTime(2008-08-23T12:54:00NZ)]].
Back to link:Lumiera/DesignProcess[]
@ -0,0 +1,72 @@
[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2007-06-07_
*Proposed by* link:ct[]
-------------------------------------
Render Optimizer
----------------
Render only parts of a frame which are necessary for the Output;
Optimize render pipeline for efficiency
Description
~~~~~~~~~~~
This idea is just stored here for later reference/implementation.
Effects provide some information on which data their output depends (transitions, temporal dependencies, color/alpha etc.) and what the operation costs. Based on this information we optimize the render pipeline; for example, if the output is a zoom, then we only need to calculate the parts of a frame which will be visible in the output (plus some further dependencies: blur has a radius, and so on). Further, in some cases it might be favorable to reorder some effects for the actual render process, as long as it would produce the same output as the original sequence of effects.
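A sketch of the region-of-interest idea: walking backwards from the output, each effect reports which input region its output depends on, so only that part needs to be rendered (hypothetical types, not a real Lumiera interface):

```cpp
struct Rect { int x, y, w, h; };

struct ZoomEffect {
    double factor;                            // output = input scaled by 'factor'
    Rect required(const Rect& out) const {    // input region we must compute
        return { int(out.x / factor), int(out.y / factor),
                 int(out.w / factor), int(out.h / factor) };
    }
};

struct BlurEffect {
    int radius;                               // blur needs a margin of 'radius' pixels
    Rect required(const Rect& out) const {
        return { out.x - radius, out.y - radius,
                 out.w + 2 * radius, out.h + 2 * radius };
    }
};
```

Composing these `required()` queries along the pipeline, from the viewport back to the source, yields the minimal region each stage has to produce.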
Tasks
^^^^^
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Rationale
~~~~~~~~~
Comments
--------
Possible classification for video filters:
1. The filter only changes the color of each pixel in the same way
2. The filter deforms the image but leaves the color
3. The filter does complex things. The only additional hint it can export is the
number of referenced past frames, if such a limit exists (sometimes it doesn't).
Filters of type 1 and type 2 never use any previous frames and are strictly
one frame in, one frame out. Filters of type 1 can always be swapped with filters of type 2; the output
is the same. All other filters cannot be swapped in general.
The good news is that:
1. All commonly used filters are either type 1 or type 2
(type 3 is more the fun effects)
2. Filters of type 2 are colormodel-agnostic
3. If a filter of type 1 makes only linear transformations of the color vectors (new_color = matrix * old_color),
the matrix can be transformed from e.g. RGB to YUV, so these filters can always work in both colorspaces directly
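The swap rule for type 1/type 2 filters can be demonstrated on a toy image (illustrative code, not actual filter plugins): a type 1 filter touches only colors, a type 2 filter only moves pixels, so applying them in either order gives the same result.

```cpp
#include <functional>
#include <vector>

using Image    = std::vector<std::vector<int>>;       // [row][col] = color value
using ColorFn  = std::function<int(int)>;             // type 1: per-pixel color map

// Apply a type 1 filter: same color function on every pixel, no movement.
Image applyColor(const Image& img, const ColorFn& f) {
    Image out = img;
    for (auto& row : out)
        for (auto& px : row) px = f(px);
    return out;
}

// A sample type 2 filter: deforms the image (here: vertical flip),
// leaving every color untouched.
Image flipRows(const Image& img) {
    return Image(img.rbegin(), img.rend());
}
```

Because the two operations act on disjoint aspects of the image (color vs. position), they commute, which is what makes the reordering optimization safe for these filter classes.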
Back to link:Lumiera/DesignProcess[]
@ -0,0 +1,84 @@
Design Process: Repository Setup
================================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-09_
*Proposed by* link:ct[]
-------------------------------------
Repository Setup
----------------
Here we describe the directory hierarchy and how the git repositories are set up.
Description
~~~~~~~~~~~
------------------------------------------------------------
far incomplete
./admin/ # administrative scripts, tools etc
./doc/user/ # user docs
./doc/devel/ # development docs
./uml/ # uml model
./tests/
./tests/bugs/
./wiki/ # tiddlywiki for semi-persistent documentation
./oldsrc/ # cinelerra2 sources
./src/$COMPONENT/ # new source, per component (lets see how bouml generation works for this)
------------------------------------------------------------
The cinelerra2 sources are put into oldsrc on a per-case basis.
We want to use the new GIT feature of "Superprojects and Submodules" when it is ready for general use.
Then we will transform several subtrees into separate GIT repos which will be linked to from the main
Project (then called the "Superproject") as submodules.
Tasks
~~~~~
Pros
^^^^
* Because it has a text-like structure, it is partially self-documenting
* GIT is flexible, and with the planned submodules it can be separated into chunks of manageable size if necessary
Cons
^^^^
* Can get large and confusing
* has no real "portal" or entry point for people wanting to join
Alternatives
^^^^^^^^^^^^
Rationale
~~~~~~~~~
Every important document, draft, text and code (including prototypes) should be checked into one SCM (or a set of related SCMs). This repo should be "almost everything" you need for the project. Because we try to use a distributed development model, every dev can/should have his own copy and feed his changes back.
This 'repository approach' avoids the problems of a central infrastructure and helps cut down project management time. Basically, every dev is himself responsible for getting every important piece of information added into "the general view of matters" in a consistent way.
Comments
--------
- Basically the structure is just fine.
- Maybe add a "pastebin" somewhere in the dev-documentation area?
- I would add the source tree roots at level 2, so we can have several submodules here:
* oldsrc
* cin3
* prototype
link:Ichthyostega[]
- Draft now.
- Yes, I left the source dirs out but this sounds fine; note that with git there is no problem reorganizing the repo later (in contrast to CVS). We can fix things afterwards when we find better ways. -- link:ct[] [[DateTime(2007-06-17T17:36:46Z)]]
- What's prototype for? Won't that be better as a branch? -- link:ct[] [[DateTime(2007-06-17T22:04:39Z)]]
- I just wanted to show there could be additional things besides the main tree (later to be separate submodules). The example was meant as a classical throwaway prototype. But I agree, in our case we just start hacking at the new tree and make feature/tryout/prototype branches...
- The point I wanted to make is: every directory 2 levels deep in the source tree, e.g. /src/cinelerra3 or /src/oldsource, should be a completely self-contained tree which can be built without needing anything from the rest of the repo. That's a prerequisite for moving to Submodules IMHO. But you seem to rather put the sourcetree roots 1 level deep. As we have just two trees at the moment (and can easily reorganize), I have no objections against this. The only point I really care about is trying to keep the source tree self-contained, without any dependencies on the rest of the "design GIT" (because of this Superprojects-Submodules thing...) -- link:Ichthyostega[] [[DateTime(2007-06-17T23:45:06Z)]]
- We could make the trees deeper than one level; I didn't intend 1-level depth. But also be careful not to make it too complex. I am not sure whether we want a complete oldsrc, that just adds weight and confusion for now (let's see). Neither am I fully decided about the hierarchy in /src (want /libs /plugins, or /src/libs /src/plugins, or /src/render/plugins? name it rather 'effects' than 'plugins'?). But I am quite sure that I want to separate /oldsrc and /src quite strictly (in /src should only be new stuff, or stuff which is carefully reviewed, with known license and author). -- link:ct[] [[DateTime(2007-06-18T08:38:43Z)]]
- I made this proposal 'final' now; further details are likely better worked out in the git repository (and we already started to define things there), see ./admin/treeinfo.sh -- link:ct[] [[DateTime(2007-06-27T16:01:52Z)]]
Back to link:Lumiera/DesignProcess[]
@ -0,0 +1,176 @@
[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2009-01-30_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Roadmap up to Lumiera 1.0
-------------------------
As the very basic architecture questions now seem to be settling down, it seems to be time
to create a first roadmap skeleton for the project. A specific approach is proposed:
we should define criteria allowing us to judge when we've reached a certain level,
plus we should define features to be ''excluded'' at a certain level. We should
''not'' define ''features'' to go into a certain level.
''the following text is copied from the Lumiera http://issues.lumiera.org/roadmap[Trac]''
Description: Milestones up to first Release
-------------------------------------------
Milestone integration: cooperating parts to render output
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, the basic subsystems of Lumiera need to be designed, the most important interfaces between
the parts of the application exist in a first usable version, and all the facilities on the rendering code path are provided
at least in a dummy version and are '''capable of cooperating to create output'''. Based on Lumiera's design, this also means
that the basic frame cache in the backend is working. And it means that a media asset and a clip can be added to the internal
session representation, which is then handed over to the builder. Probably it's a good idea to include basic playback/display
of the rendered frames within the GUI while they are created.
Notable features ''not'' included
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* no saving and loading of sessions
* no manipulation of objects through the GUI (just display of the session)
* no adding of arbitrary media or inclusion of arbitrary plugins
* no media stream type conversions
* no playback of sound
Milestone alpha: operations accessible for users
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, the fundamental operations you'd expect from video editing software
can be '''accessed by a user''' (not a developer!). This means the basic distribution/release model
is set up and a ''user'' is able to compile Lumiera or install an existing package. Moreover, a user should
be able to create/open a session file (without any quirks), add some media (probably only a limited number
of media types will be supported), and then perform the most basic operations like positioning, trimming,
copying, playing and finally rendering the timeline. At that point the integration phase is closed and Lumiera has reached alpha level.
Notable features ''not'' included
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* advanced configuration
* complex/compound projects
* meta-clips, multi-cam, advanced multi channel sound handling
* any asset management beyond the very basic media handling
* advanced wiring and routing
* media stream type conversions
* adding of arbitrary plugins
* proxy editing
* configurable GUI
Milestone beta: usable for real work
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, users should be able to '''get real work done with Lumiera'''. Especially,
a basic asset management should be in place, Lumiera should be able to handle the most common media types,
and the user should be able to do common editing tasks (adding, trimming, rolling, splicing, copying, moving)
both by direct manipulation within the timeline and by using the conventional two-viewer setup with
in/out points. Moreover, it should be possible to attach effects (probably still just some limited kinds
of effects), apply simple transitions and control the layering and overlay mode on output. Similarly,
the elementary routing capabilities and the handling of multiple sequences should be supported (probably
still with limitations). The framework for automation handling should be in place, while there may
still be limitations on automation/keyframe editing. Roughly this feature set indicates that Lumiera
has entered the beta phase.
Notable features ''not'' included
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* configuration editing through the GUI
* advanced routing, full support of virtual clips
* arbitrary wiring, macro effects or similar
* view or editing of individual nodes
* arbitrary nesting and navigation within projects
* full multi-cam support, support for non-standard image/sound types
* plugin libraries and packaging
* interfaces for plugin authors are not frozen!
* fully configurable GUI
* full support for proxy editing everywhere
* extended workflow features (like "export to DVD")
Milestone release-1.0: usable for productions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For this milestone to be reached, Lumiera should be a '''reliable tool for productions with a deadline'''.
Lumiera 1.0 is not the ''dream machine,'' but users should be able to do simple productions. We should be
able to promote Lumiera to professionals without remorse. The GUI should be mature, undo/recovery should
work airtight, performance should be ok-ish and output quality without any glitches. Plugin authors
can rely on stable interfaces and backwards compatibility from now on, up to release 2.0.
Notable features ''not'' included
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* bells and whistles
* render farm, distributed execution, hardware acceleration
* MIDI and adding of non-standard media kinds yet-to-appear
* advanced media management, full support for subtitling
* support for special workflow features
* foreign session import/export
* collaboration features
* artificial intelligence
Tasks
+++++
Please review and discuss this proposal, consider if it's of any use setting it up this way...
Pros
^^^^
* doesn't hinder us
* helps to avoid controversies
Cons
^^^^
The contents of the roadmap aren't very specific and thus aren't of much help
for determining ''what has to be done next.''
Alternatives
^^^^^^^^^^^^
* Create a complete development plan and derive the roadmap from it.
* Use no roadmap at all beyond the next foreseeable minor release.
Rationale
~~~~~~~~~
We deliberately don't set any date schedule. Releases happen ''when they are ready.'' We may decide to do sprints on
a short-term timeframe, but it doesn't help to promise things we can't calculate for sure. In a commercial setup, you
have to commit to features and dates, but you also control a certain budget, which gives you the means to ''make things
happen.'' In Open Source development, we have to be patient and wait for things to happen ;-)
Thus the proposal is to set up just a very coarse and almost self-evident roadmap skeleton, but to discuss and define
criteria up-front which allow us to determine when we've actually reached a given level. Moreover, the proposal is
to add a list of features which can safely be ''excluded'' from the given milestone.
Comments
--------
Looks ok to me, the dust is settling and we can now think about such a roadmap. Some goals might be shifted between milestones by collaborative decision, but so far I agree. Otherwise I'd like to keep the issue tracker focused on work to be done; it shall not become a wishlist tool for non-developers, any such things are deliberately left out.
-- link:ct[] 2009-01-31
In ticket #4 (debian packaging) I explained that packaging might be optional for 'alpha' and should be moved to 'beta'.
-- link:ct[] 2009-02-01
OK, we should make the packaging optional. I think, for alpha the criterion is "accessibility for users". If compiling remains as easy as it is now (compared with other media-related projects), then this shouldn't be a barrier.
-- link:Ichthyostega[] 2009-02-01
Back to link:Lumiera/DesignProcess[]

View file

@ -0,0 +1,86 @@
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2008-07-26_
*Proposed by* link:ct[]
-------------------------------------
Scripting Language
------------------
Add support for the 'Lua' scripting language in Lumiera.
Description
~~~~~~~~~~~
Since the beginning we have talked about wanting scripting support within Lumiera. Some weeks ago we made an informal decision on IRC to bless Lua as the 'official' scripting language.
Tasks
^^^^^
* Investigate Lua's C bindings and integrate it
* It will attach to the link:Plugin/Interface[] System cehteh is working on
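Since the plugin system exports everything as versioned C interface structs (first member a size, then function pointers, new functions appended at the end), a Lua binding would essentially wrap such a struct. The following is a minimal sketch under that assumption; all names (`lumiera_script_iface`, `add_media`, `render`) are hypothetical illustrations, not the actual Lumiera API:

```c
#include <stddef.h>

/* Hypothetical versioned interface struct, following the plugin-interface
 * proposal: first member is a size filled in by the implementation,
 * followed by function pointers which must never be NULL. */
typedef struct lumiera_script_iface {
    size_t size;                               /* set to sizeof() by the implementation  */
    int  (*add_media) (const char* filename);  /* returns 0 on success (illustrative)    */
    int  (*render)    (void);                  /* later additions get appended here      */
} lumiera_script_iface;

/* dummy implementation standing in for the real core */
static int dummy_add_media (const char* filename) { (void)filename; return 0; }
static int dummy_render (void) { return 0; }

static lumiera_script_iface script_iface = {
    sizeof (lumiera_script_iface),
    dummy_add_media,
    dummy_render
};

/* A Lua binding would wrap each function pointer in a lua_CFunction;
 * before calling, it can check 'size' to detect whether the loaded
 * implementation already provides a given (later-added) function. */
static int iface_provides (const lumiera_script_iface* i, size_t member_end_offset)
{
    return i->size >= member_end_offset;
}
```

Because new functions are only ever appended, a binding compiled against a newer header can still load an older implementation and detect missing functions via the size check, instead of crashing on a NULL pointer.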
Pros
^^^^
* Small, lightweight
* Simple syntax
* Reasonably fast
* incremental GC (predictable performance)
Cons
^^^^
* Lua has only a moderate library compared to Python, for example, though I don't think this is a problem for our purpose
Alternatives
^^^^^^^^^^^^
There are quite a lot of other scripting languages which would be suitable. Where it makes sense, these could be bound later too.
Rationale
~~~~~~~~~
We need scripting soon, if only for driving the testsuite. Later on, scripting might be used to customize workflows and other application internals. Furthermore, one might implement a high-level / batch interface to Lumiera, to give it a scripting interface similar to link:AviSynth[].
Comments
--------
To make it more clear: the intention is to have the scripts call into well-defined Interfaces / API functions, which are accessed via the plugin system. It is ''not'' intended to have just arbitrary scripts anywhere and everywhere, but -- on the other hand -- all important functionality which can be activated via the GUI should be similarly accessible via the scripting APIs without restrictions. So, as Python and Ruby and Perl are popular scripting languages, we'll have the necessary bindings sooner or later.
Beyond that, it is possible to have some ''extension points'' where configuration is added to the application in a well defined manner by scripts. These scripts provide for some basic customisation and can even add some of the important higher-level features. With respect to these, the idea of this proposal is to have one ''required scripting language'', so scripts in this language are guaranteed to work and may be used to add essential features. I consider Lua to be an almost perfect fit for this purpose.
-- link:Ichthyostega[] [[DateTime(2008-07-27T22:36:52Z)]]
Well, my intention is to make Lua a real first-class binding, where any internal interface gets exported and would be usable from scripting; that contradicts your limitation of making it only an extension language. But hold on:
* these internal bindings should be announced as volatile, do-not-use for anyone external, and 'unsupported'
* we can build an 'exported' scripting API on top of that, which then will be the official way to control Lumiera from a high-level script.
* the reason I want to make it this way is that it will become possible to implement tests, mock-ups and prototypes of some functionality in Lua. I predict that this will help us greatly in development as we progress further. Things whose usefulness is doubtful can be prototyped and tried out in an afternoon rather than a week. That makes it possible to 'see' things which otherwise would be rejected because they are not worth a try.
* some very custom specializations (studio workflows) would be easier to integrate
* of course, if this is used wrongly it can really damage the health of the system, but I think this is obvious and very explicit; there are easier ways to damage it, just whack your computer with a sledgehammer for example.
* there might be some laziness to keep prototypes in Lua instead of reimplementing them properly in C/C++; well, IMHO that's OK, at some point the need will arise to make it proper, if the Lua implementation is insufficient, but that's arguable.
-- link:ct[] [[DateTime(2008-07-30T16:22:32Z)]]
I have no problems using Lua. It is proven in the industry, well supported, fast, efficient, high level and designed for this purpose. My only "complaint" is that Lua isn't my pet language (Scheme). And that really isn't a complaint at all.
-- link:PercivalTiglao[] [[DateTime(2008-07-28T19:56:25Z)]]
I think Python should be reconsidered: granted, all languages in this class are powerful at what they do; however, Python has particularly well-developed libraries and is used as the scripting language in the main raster (GIMP), vector (Inkscape) and 3D (Blender, link:PythonOgre[], PyCrystal) apps. Combinations of these apps are all going to be working in a stack in professional production, so the fact that all the others use Python makes a more persuasive case for adoption than any micro-benefit in performance or features that can be found between Python/Ruby/Perl/Lua etc. Python is also used extensively in RedHat and Ubuntu admin scripting, where most professional deployments will be. If the goal is truly to get a professional setup, i.e. to get this into professional production houses, then I think having a single language from OS admin the whole way through the stack is a massive gain for the types of users who will be using it. I personally prefer Ruby. Naturally it's your decision to make, all the best, we are looking forward to alphas and betas in the future.
-- mytwocents
This proposal is about the ''required'' scripting language, i.e. when accepted, Lua will be a necessary prerequisite for running Lumiera. This doesn't rule out the ''use'' of other scripting languages. We strive for clean interfaces, thus it shouldn't be much of a problem to create Python bindings. And given the popularity of Python, I guess it won't be long until we have some Python bindings. But ''requiring'' Python would mean having a Python runtime in memory most of the time -- for this, Lua obviously is the better choice, because it's much more lightweight and minimalistic.
-- link:Ichthyostega[] [[DateTime(2008-09-30T02:17:08Z)]]
Conclusion
----------
Lua is '''accepted''' as the required scripting language by the October 2008 dev meeting.
Back to link:Lumiera/DesignProcess[]

View file

@ -0,0 +1,70 @@
[grid="all"]
`------------`-----------------------
*State* _Parked_
*Date* _2007-06-13_
*Proposed by* link:ct[]
---------------------------------
Skills Collection
-----------------
Make a page where people can tell in which areas they are willing to support others.
Description
~~~~~~~~~~~
Some page should list different things needed for working on the project, and users should attach themselves when they offer support for them. The idea is that people who run into problems know whom to ask. In contrast, this is not meant like those skill pages on Sourceforge and such; I don't like that rating and posing system. We let people assign themselves to skills, not skills to people, and there is no rating.
Skills can be anything which is needed: the tools we use, the code we create, etc.
Example
^^^^^^^
.Git
* ct
* j6t
.autotools
* ct
.lumiera/renderpipe
* ichthyo
... shall this contain emails?
Tasks
^^^^^
* just set this page up, either on this wiki or in a tiddlywiki which gets checked into the repo
Pros
^^^^
* inter developer support and help network
Cons
^^^^
* privacy concerns: people might not want to publish what they know, or rather what they do ''not'' know
Alternatives
^^^^^^^^^^^^
...urgs
Rationale
~~~~~~~~~
This only announces where people offer support within the Lumiera developer community and is absolutely voluntary.
Comments
--------
Back to link:Lumiera/DesignProcess[]

View file

@ -0,0 +1,115 @@
[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-10-05_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Stream Type System
------------------
Especially in the Proc-Layer, we need a framework to deal with different "kinds" of media streams.
+
This is the foundation to be able to define what can be connected and to separate out generic parts and isolate specific parts.
Description
~~~~~~~~~~~
The general idea is that we need meta information, and -- more precisely -- that _we_ need to control the structure of this metadata, because it has immediate consequences for the way the code can test and select the appropriate path to deal with some data or a given case. This brings us into a difficult situation:
* almost everything regarding media data and media handling is notoriously convoluted
* because we can't hope to ever find a general umbrella, we need an extensible solution
* we want to build on existing libraries rather than re-inventing media processing.
* a library well suited for some processing task does not necessarily have a type classification system which fits our needs.
The proposed solution is to create an internal Stream Type System which acts as a bridge to the detailed (implementation-type) classification provided by the library(s). Moreover, this approach was chosen especially to play well with the rule-based configuration, which is envisioned to play a central role for some of the more advanced things possible within the session.
Terminology
^^^^^^^^^^^
* *Media* is comprised of a set of streams or channels
* *Stream* denotes a homogeneous flow of media data of a single kind
* *Channel* denotes an elementary stream, which can't be further separated _in the given context_
* all of these are delivered and processed in a smallest unit called *Frame*. Each frame corresponds to a time interval.
* a *Buffer* is a data structure capable of holding a Frame of media data.
* the *Stream Type* describes the kind of media data contained in the stream
Levels of classification
^^^^^^^^^^^^^^^^^^^^^^^^
The description/classification of streams is structured into several levels. A complete stream type (implemented by a stream type descriptor) contains a tag or selection for each of these levels.
* Each media belongs to a fundamental *kind of media*, examples being _Video, Image, Audio, MIDI, Text,..._ This is a simple Enum.
* Below the level of distinct kinds of media streams, within every kind we have an open-ended collection of *Prototypes*, which, within the high-level model and for the purpose of wiring, act like the "overall type" of the media stream. Everything belonging to a given Prototype is considered to be roughly equivalent and can be linked together by automatic, lossless conversions. Examples for Prototypes are: stereoscopic (3D) video versus the common flat video lacking depth information, spatial audio systems (Ambisonics, Wave Field Synthesis), panorama-simulating sound systems (5.1, 7.1,...), stereophonic and monaural audio.
* Besides the distinction by prototypes, there are the various *media implementation types*. This classification is not necessarily hierarchically related to the prototype classification, though in practice there will commonly be some sort of dependency. For example, both stereophonic and monaural audio may be implemented as 96kHz 24bit PCM with just a different number of channel streams, but we may as well have a dedicated stereo audio stream with two channels multiplexed into a single stream.
Working with media stream implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For dealing with media streams of various implementation types, we need _library_ routines, which also yield a _type classification system_ suitable for their intended use. Most notably, for raw sound and video data we use the http://gmerlin.sourceforge.net/[GAVL] library, which defines a fairly complete classification system for buffers and streams. For the relevant operations in the Proc-Layer, we access each such library by means of a Facade; it may sound surprising, but actually we just need to access a very limited set of operations, like allocating a buffer. _Within_ the Proc-Layer, the actual implementation type is mostly opaque; all we need to know is whether we can connect two streams and get a conversion plugin.
Thus, to integrate an external library into Lumiera, we explicitly need to implement such a Lib Facade for this specific case, but the intention is to be able to add this Lib Facade implementation as a plugin (more precisely as a "Feature Bundle", because it probably includes several plugins and some additional rules).
Link between implementation type and prototype
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At this point the rule-based configuration comes into play. Mostly, to start with, determining a suitable prototype for a given implementation type is sort of a tagging operation. But it can be supported by heuristic rules and a flexible configuration of defaults. For example, when confronted with a media with 6 sound channels, we simply can't tell if it's a 5.1 sound source, or a pre-mixed orchestra music arrangement to be routed to the final balance mixing, or a prepared set of spot pickups and overdubbed dialogue. But a heuristic rule defaulting to 5.1 would be a good starting point, while individual projects should be able to set up very specific additional rules (probably based on some internal tags, conventions on the source folder or the like) to get a smooth workflow.
Moreover, the set of prototypes is deliberately kept open-ended, because some projects need much more fine-grained control than others. For example, it may be sufficient to subsume any video under a single prototype and just rely on automatic conversions, while other projects may want to distinguish between digitized film, NTSC video and PAL video, meaning these would be kept in separate pipes and couldn't be mixed automatically without manual intervention.
Connections and conversions
^^^^^^^^^^^^^^^^^^^^^^^^^^^
* It is _impossible to connect_ media streams of different kinds. Under some circumstances there may be the possibility of a _transformation_ though. For example, sound may be visualized, MIDI may control a sound synthesizer, subtitle text may be rendered to a video overlay. In any case, this involves some degree of manual intervention.
* Streams subsumed by the same prototype may be _converted_ losslessly and automatically. Streams tagged with differing prototypes may be _rendered_ into each other.
* Conversions and judging the possibility of making connections at the level of implementation types is coupled tightly to the used library; indeed, most of the work to provide a Lib Facade consists of coming up with a generic scheme to decide this question for media streams implemented by this library.
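The connection rules above can be sketched in C roughly as follows. This is an illustrative sketch only: the type tags, the verdict enum and the pointer comparison standing in for the library facade's implementation-type check are all hypothetical, not actual Lumiera code:

```c
#include <string.h>

/* Illustrative type tags: a real stream type descriptor would carry a
 * media kind, a prototype reference and an opaque implementation type. */
typedef enum { KIND_VIDEO, KIND_IMAGE, KIND_AUDIO, KIND_MIDI, KIND_TEXT } media_kind;

typedef struct {
    media_kind  kind;        /* fundamental kind of media                    */
    const char* prototype;   /* open-ended prototype tag, e.g. "stereo"      */
    const void* impl;        /* opaque implementation type, owned by the lib */
} stream_type;

typedef enum {
    CONNECT_DIRECT,      /* same implementation type: plug together directly */
    CONNECT_CONVERT,     /* same prototype: automatic lossless conversion    */
    CONNECT_RENDER,      /* same kind, other prototype: needs rendering      */
    CONNECT_IMPOSSIBLE   /* different kind: only a manual transformation     */
} connect_verdict;

/* Apply the rules from the proposal top-down. Judging connections on the
 * implementation-type level would be delegated to the library facade; here
 * a plain pointer comparison stands in for that check. */
static connect_verdict judge_connection (const stream_type* a, const stream_type* b)
{
    if (a->kind != b->kind)                        return CONNECT_IMPOSSIBLE;
    if (strcmp (a->prototype, b->prototype) != 0)  return CONNECT_RENDER;
    if (a->impl != b->impl)                        return CONNECT_CONVERT;
    return CONNECT_DIRECT;
}

/* example instances */
static const stream_type MONO   = { KIND_AUDIO, "mono",   0 };
static const stream_type STEREO = { KIND_AUDIO, "stereo", 0 };
static const stream_type VIDEO  = { KIND_VIDEO, "flat",   0 };
```

The point of the sketch is the ordering: the kind check is a hard barrier, the prototype check decides between automatic conversion and explicit rendering, and only the last, library-specific step needs knowledge of the implementation type.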
Tasks
^^^^^
* draft the interfaces ([green]#✔ done#)
* define a fallback and some basic behaviour for the relation between implementation type and prototypes [,yellow]#WIP#
* find out if it is necessary to refer to types in a symbolic manner, or if it is sufficient to have a ref to a descriptor record or Facade object.
* provide a Lib Facade for GAVL [,yellow]#WIP#
* evaluate if it's a good idea to handle (still) images as a separate distinct kind of media
Alternatives
^^^^^^^^^^^^
Instead of representing types by metadata, leave the distinction implicit and implement the different behaviour directly in code. Have video tracks and audio tracks. Make video clip objects and audio clip objects, each utilizing some specific flags, like sound is mono or stereo. Then either switch, switch-on-type, or scatter the code out into a bunch of virtual functions. See the Cinelerra source code for details.
In short, following this route, Lumiera would be plagued by the same notorious problems as most existing video/sound editing software: implicitly assuming that "everyone" just does "normal" things. Of course, users always were and always will be clever enough to work around this assumption, but the problem is that all those efforts will mostly stay isolated and can't crystallize into a reusable extension. Users will do manual tricks, use some scripting, or rely on project organisation and conventions, which in turn creates more and more pressure for the "normal" user to just do "normal" things.
To make it clear: both approaches discussed here do work in practice, and selecting the one or the other is more a cultural issue than a question guided by technical necessities.
Rationale
~~~~~~~~~
* use type metadata to factor out generic behaviour and make variations in behaviour configurable.
* don't use a single classification scheme, because we deal with distinctions and decisions on different levels of abstraction
* don't try to create a universal classification of media implementation type properties; rather rely on the implementation libraries to already provide a classification scheme well suited for _their_ needs.
* decouple the part of the classification guiding the decisions on the level of the high level model from the raw implementation types, reduce the former to a tagging operation.
* provide the possibility to incorporate very project specific knowledge as rules.
Comments
--------
As usual, see the http://www.lumiera.org/wiki/renderengine.html#StreamType[Proc-Layer impl doku] for more information and implementation details.
Practical implementation-related note: I found I was blocked by this one in further working out the details of the processing nodes' wiring, and thus from making any advance on the builder, and thus from knowing more precisely how to organize the objects in the link:EDL/Session[]. Because I need a way to define a viable abstraction for getting a buffer and working on frames. The reason is not immediately obvious (because initially you could just use an opaque type). The problem is related to the question of what kind of structures I can assume the builder to work on when deciding on connections. Because at this point, the high-level view (pipes) and the low-level view (processing functions with a number of inputs and outputs) need to be connected in some way.
The fact that we don't have a rule based system for deciding queries currently is not much of a problem. A table with some pre configured default answers for a small number of common query cases is enough to get the first clip rendered. (Such a solution is already in place and working.)
-- link:Ichthyostega[] 2008-10-05
Whoops, quick note, I didn't read this proposal completely yet. Stream types could, or maybe should, be handled cooperatively together with the backend. Basically the backend offers access to regions of a file as continuous blocks; these regions are addressed as "frames" (these are not necessarily video frames). The backend will keep indices which associate this memory management with the frame number, plus adding the capability of per-frame metadata. These indices get abstracted by "indexing engines"; it will be possible to have different kinds of indices over one file (for example, one enumerating single frames, one enumerating keyframes or GOPs). Such an indexing engine would also be the place to attach per-media metadata. From the proc layer it can then look like +struct frameinfo* get_frame(unsigned num)+ where +struct frameinfo+ (not yet defined) is something like +{ void* data; size_t size; struct metadata* meta; ...}+
-- link:ct[] 2008-10-06
Back to link:Lumiera/DesignProcess[]

View file

@ -0,0 +1,72 @@
Design Process : Tag Clouds for Resources
=========================================
[grid="all"]
`------------`-----------------------
*State* _Dropped_
*Date* _2008-07-15_
*Proposed by* link:PercivalTiglao[]
-------------------------------------
Tag Clouds for Resources
------------------------
Perhaps a cloud of tags is unnecessary, but tagging resources, similar to YouTube or tag clouds, allows for efficient searching and filtering. Anyone who uses the web would know how to use them. If a "cloud of tags" approach is used, then organizing the tags by some sort of frequency would be useful, i.e. the more a specific tag is used, the larger it gets, or perhaps the more often that tag is searched on.
Description
~~~~~~~~~~~
Tasks
~~~~~
Pros
~~~~
* Simple GUI Concept
* Eases management of resources with Search
* Orthogonal to other resource management schemes like Folders
Cons
~~~~
Alternatives
~~~~~~~~~~~~
Rationale
~~~~~~~~~
Comments
--------
* Note: I was inspired with this idea during an email conversation with Rick777. -- link:PercivalTiglao[] [[DateTime(2008-07-17T14:29:57Z)]]
* Agreed, this is useful. Also, more advanced config rules can make use of such tags, and wiring can depend on them, for example to route your dialogue audio to a different global bus than the music or ambiance.
-- link:Ichthyostega[] [[DateTime(2008-07-27T22:23:38Z)]]
Conclusion
----------
This Design Proposal is 'superseded' by a much more advanced proposal: link:DelectusShotEvaluator[Delectus]
(Dropping it doesn't mean disapproval)
''''
Back to link:Lumiera/DesignProcess[]

View file

@ -0,0 +1,94 @@
Design Process : Time Handling
==============================
[grid="all"]
`------------`-----------------------
*State* _Final_
*Date* _2007-06-21_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Time handling
-------------
How to handle time values in the code, and which policy to apply to the "current" position.
Description
~~~~~~~~~~~
. *Representation of time values*
* We use a uniform time type. Time is time, not frames, samples etc.
* All timings in Lumiera are based on integral datatypes
* We use a fixed, very fine grid, something of the sort of microseconds
* The internal representation is based on a `typedef int64_t gavl_time_t`
* We use a set of library routines and convenience-methods to
- Get time in different formats (fractional seconds, frame counts)
- Calculate with time values (overloaded operators)
* Time measurement is zero based (of course :-? )
. *Quantizing to a frame index or similar*
* Quantizing/rounding shall happen only once at a defined point in the calculation chain and, if in doubt, be done always as late as possible.
* Values needing to be quantized to time (grid) positions are calculated by half-way rounding, but the result should not depend on the actual zero-point of the scale (i.e. `floor(0.5+val)`, thus quant(0.5) yields 1, quant(0.49) yields 0, quant(-0.5) yields 0 )
. *Position of frames*
* Frame numbers are zero based and Frame 0 starts at time=0 (or whatever the nominal start time is)
* Each frame starts when the locator hits its lower border (inclusively) and ends when the locator is on its upper border (exclusively)
image:images/Lumi.FramePositions1.png[]
* When the locator snaps to frames, this means it can be placed solely on the start positions of frames
* When the locator is placed on such a start position, this means 'always' displaying the frame starting at this position, irrespective of playback direction.
. *Current position for keyframe nodes and automation*
* When parameter values for plugins/automation need to be retrieved on a per-frame basis (which normally is the case), for each frame there is a well-defined 'point of evaluation' time position, irrespective of the playback direction
* There is no single best choice where to put this "POE", thus we provide a switch
image:images/Lumi.FramePositions2.png[]
- 'Point of evaluation' of the automation is in the middle of the timespan covered by a frame
- 'Point of evaluation' is on the lower bound of each frame
* Maybe additional position or fraction (?)
* Moreover, we provide an option to snap the keyframe nodes to the 'point of evaluation' within each frame or to be able to move them arbitrarily
* When keyframes are set by tweaking of parameters, they are located at the 'point of evaluation' position.
Tasks
~~~~~
* Figure out what has to be done when switching the "current position" policy on an existing project.
Alternatives
~~~~~~~~~~~~
Leave everything as in Cinelerra2, i.e. show frames after the locator has passed over them,
behave differently when playing backwards, and set the keyframes on the position of the locator but use them on the
frame actually to be shown (which differs according to the playback direction but is always "one off").
Why not? Because it makes frame-precise working with keyframes a real pain and even creates contradictory situations when you
switch back and forward while tweaking.
Similar for the issues with quantized values. At first sight, e.g. directly using the frame numbers as coordinates (as Cinelerra does) seems to be clever, but in the long run we get lots of case distinctions scattered all over the code. Thus better to use one uniform scheme, work with precise time values as long as possible, and only quantize for rendering a given frame.
Rationale
~~~~~~~~~
The intention is to make time handling and calculations as uniform and "rational" as possible. We try to stick to the precise mathematical values and let the calculations yield a result in a uniform manner, instead of sticking to "simple" values like frame counts or even a session-wide frame rate.
. Time and interval calculations are tricky. Solve this question once and be done.
. Rounding is always dangerous; rounded values are not the more "clean" values. The floor-rounding rule is chosen because the length of an interval after quantisation should not depend on its position in relation to the zero point. The usual mathematical rounding behaves "symmetrically" to the zero point, which could yield a different length after quantisation if an interval contains the zero point.
. This is based on the analogy with real film running through a film projector (or the usual fencepost problem)
. While using the actual position of the locator as the "current" position for keyframes seems more natural at first, it creates problems when mixing footage with different framerates or when using low-framerate proxy footage.
Comments
~~~~~~~~
* This is the summary of a discussion cehteh, Plouj and ichthyo just had on irc.
-- link:Ichthyostega[] [[DateTime(2007-06-21T05:12:03Z)]]
* We use GAVL now (needs to be included in the build system)
-- link:ct[] [[DateTime(2008-03-05T16:19:22Z)]]
* I've tidied up this old design proposal; we could make it final now. I've changed the rounding rule, please check if it's OK. In the original proposal, we wanted to use the common mathematical rounding rule, i.e. round(-x) = -round(x). I changed this because of the danger that interval lengths or alignment "jump" depending on the position relative to the time zero point.
-- link:Ichthyostega[] [[DateTime(2008-10-04T22:47:54Z)]]
* Looks OK to me; maybe we should wrap up the GAVL time handling in a very thin layer to unify our time functions, and then cast this again into an authoritative testsuite/specification. Anyway, I think this can be finalized.
-- link:ct[] [[DateTime(2008-10-06T06:44:21Z)]]
Conclusion
~~~~~~~~~~
* The adapted form of this proposal was *accepted* by the October 2008 developer meeting.
* The proposed thin library layer to centralize time calculations shall be added on demand. When doing so, we need to add thorough test coverage for time calculations too.
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-11-03_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Relation of Project, Timeline(s), Sequence(s) and Output generation
-------------------------------------------------------------------
In the course of our discussions it has meanwhile become clear that Lumiera will show multiple ''timeline-like'' views within one project. Similarly, it's clear now that we will support [wiki:self:../EDLsAreMetaClips nested Sequences as meta-clips]. The purpose of this entry is to settle on some definitions and clarify the relationships between these concepts.
Definitions
~~~~~~~~~~~
Project:: the top-level context in which all edit work is done over an extended period of time. The Project can be saved and re-opened. It comprises everything the user is working on; it contains all information, assets, state and objects to be edited.
Session:: the current in-memory representation of the Project when opened within an instance of Lumiera. This is an implementation-internal term. For the GUI and from the user's POV we should always prefer the term "Project" for the general concept.
Timeline:: the top-level element within the Project. It is visible within a ''timeline view'' in the GUI and represents the effective (resulting) arrangement of media objects, resolved to a finite time axis, to be rendered for output or viewed in a Monitor (viewer window). Timelines are top-level and may not be further combined. A timeline comprises:
* a time axis in absolute time (WIP: not clear whether this is an entity or just a conceptual definition)
* a ''!PlayController''
* a list of global ''Pipes'' representing the possible outputs (master busses)
* exactly one top-level ''Sequence,'' which in turn may contain further nested Sequences
Timeline View:: a view in the GUI featuring a given timeline. There might be multiple views of the same timeline, all sharing the same !PlayController. A proposed extension is the ability to ''focus'' a timeline view to a sub-Sequence contained within the top-level sequence of the underlying Timeline. (Intended for editing meta-clips)
Sequence:: a collection of ''MObjects'' placed onto a tree of tracks (this entity was formerly named ''EDL''; an alternative name would be ''Arrangement''). By means of this placement, the objects can be anchored relative to each other, relative to external objects, or absolutely in time. Placement and routing information can be inherited down the track tree, and missing information is filled in by configuration rules. This way, a sequence can connect to the global pipes when used as top-level sequence within a timeline, or alternatively it can act as a virtual media when used within a meta-clip (nested sequence). In the default configuration, a Sequence contains just a single root track and sends directly to the master busses of the timeline.
Pipe:: the conceptual building block of the high-level model. It can be thought of as simple linear processing chain. A stream can be ''sent to'' a pipe, in which case it will be mixed in at the input, and you can ''plug'' the output of a pipe to another destination. Further, effects or processors can be attached to the pipe. Besides the global pipes (busses) in each Timeline, each clip automatically creates N pipes (one for each distinct content stream, i.e. normally N=2, namely video and audio)
link:PlayController[]:: coordinating playback, cueing and rewinding of a ''!PlayheadCursor'' (or multiple in case there are multiple views and or monitors), and at the same time directing a render process to deliver the media data needed for playback. Actually, the implementation of the !PlayController(s) is assumed to live in the backend.
link:RenderTask[]:: basically a !PlayController, but collecting output directly, without moving a !PlayheadCursor (maybe a progress indicator) and not operating in a timed fashion, but freewheeling or in background mode
Monitor:: a viewer window to be attached to a timeline. When attached, a monitor reflects the state of the timeline's !PlayController, and it attaches to the timeline's global pipes by stream-type match, showing video as monitor image and sending audio to the system audio port (Alsa or Jack). Possible extensions are for a monitor to be able to attach to probe points within the render network, to show a second stream as (partial) overlay for comparison, or to be collapsed to a mere control for sending video to a dedicated monitor (separate X display or firewire)
Relations
~~~~~~~~~
image:images/fig132741.png[Relation of Project Timeline Sequence Output]
.Some observations:
* this UML shows the relation of concepts, not so much their implementation
* within one Project, we may have multiple independent timelines and at the same time we may have multiple views of the same timeline.
* all playhead displays within different views linked to the ''same'' underlying timeline are effectively linked together, as are all GUI widgets representing the same !PlayController owned by a single timeline.
* I am proposing to do it this way per default, because it seems the best match for the user's expectations (it is well known that multiple playback cursors tend to confuse the user)
* the timeline view is modeled as a sub-concept of "timeline" and thus can stand in for it. Thus, to start with, it doesn't make any difference for the GUI whether it talks to a timeline view or a timeline.
* each timeline ''refers'' to a (top-level) sequence. I.e. the sequences themselves are owned by the project, and theoretically it's possible to refer to the same sequence from multiple timelines directly and indirectly.
* besides, it's also possible to create multiple independent timelines -- in the extreme case even referring to the same top-level sequence. This configuration gives the ability to play the same arrangement in parallel with multiple independent play controllers (and thus independent playhead positions)
* to complement these possibilities, I'd propose to give the ''timeline view'' the ability to be focussed (re-linked) to a sub-sequence. This way, it would stay connected to the main play control, but at the same time show a sub-sequence ''in the way it will be treated as embedded within the top-level sequence.'' This would be the default operation mode when a meta-clip is opened (and shown in a separate tab with such a linked timeline view). The reason for this proposed handling is again to give the user the least surprising behaviour. Because when -- on the contrary -- the sub-sequence is opened as a separate timeline, a different absolute time position and a different signal routing may result; doing so should be reserved for advanced use, e.g. when multiple editors cooperate on a single project and a sequence has to be prepared in isolation prior to being integrated into the global sequence (featuring the whole movie).
* one rather unconventional feature to be noted is that the ''tracks'' live within the ''sequences'' and not at the level of the global busses as in most other video and audio editors. The rationale is that this allows fully exploiting the tree structure even when working with large and compound projects; it allows sequences to be local clusters of objects, including overlays, masks etc. In particular, this makes it possible to use a sequence interchangeably as a virtual media, and even at the same time as the contents of another top-level timeline.
Tasks
^^^^^
* Interfaces on the link:GUI/Proc[] level need to be fully specified. Especially, "Timeline" is now promoted to be a new top-level entity within the Session
* communication between the !PlayController(s) and the GUI need to be worked out
* the stream type system, which is needed to make this default connection scheme work, is currently just planned and drafted. Doing an exemplary implementation for GAVL-based streams is on my immediate agenda and should help to unveil any lurking detail problems in this design.
* with the proposed focusing of the timeline view on a sub-sequence, there are dark corner cases to be explored in detail to find out whether this is possible; otherwise we'd need a new solution for how to edit the embedded sub-sequences
* of course we need to re-check the intended interactions from the GUI viewpoint (both design and GUI implementation)
Discussion
~~~~~~~~~~
Pros
^^^^
* this design naturally scales down to behave like the expected simple default case: one timeline, one track, simple video/audio out.
* but at the same time it allows for bewildering complex setups for advanced use
* separating the signal flow and making the track tree local to the sequence solves the problem of how to combine independent sub-sequences into a compound session
Cons
^^^^
* it is complicated
* it is partially uncommon, but not fully revolutionary, and thus may be misleading.
* the control handling in GUI can become difficult (focus? key shortcuts?)
* the ability to have both separate timelines and timeline views can be very confusing. We really need to think about suitable UI design
* because the signal flow is separated from the tracks, we need to re-design the way common controls (fader, pan, effect UIs) are integrated, instead of just using the well-known approach
Alternatives
^^^^^^^^^^^^
* just one session and a list of tracks; don't cover the organisation of larger projects at all.
* allow only linear sequences with one track, not cluster-like sequences
* make the tracks synonymous with the global busses as usual. Use an allocation mechanism when "importing" separate sub-projects
* rather make compound projects a loosely coupled collection of stand-alone projects, which are just "played" in sequence. Avoid nested referrals.
* don't build nested structures, rather build one large timeline and provide flexible means for hiding and collapsing parts of it.
Rationale
^^^^^^^^^
Obviously, the usual solution was found to be limiting and difficult to work with in larger projects. On the other hand, the goal is not to rely on external project organisation, but rather to make Lumiera support more complicated structures without complicated "import/export" rules or the need to create a specific master document different from the standard timeline. The solution presented here seems more generic and to require less special-case treatment than the conventional approach.
Comments
--------
GUI handling could make use of expanded view features like:
* a drop-down view of a track, which just covers what was shown below. This may be used for quick précis looks, simple edits, or clicking on a subtrack to burrow further down.
* show the expanded track view in a new tab. This creates another tabbed view which makes the full window area available for a "magnified" view. It is very easy to jump back to the main track view, or to other view tabs (edit points).
* The main track view could show highlights for "currently created" views/edit points, and whether they are currently being used or not (active/inactive).
* Each tab view could show a miniature view of the main track view (similar concept to linux desktop switching), to make it easy to figure out which other tabs to jump to, without having to go back to the main view. This can be a user option as not everybody would need this all of the time.
* the drop-down view could have some icons on its top bar, positioned very close to the point on the track that was clicked to trigger the drop-down. This close proximity means that the mouse travel to commonly used (next) options is minimal. Icons for common options might include: remove the drop-down view, create a new tab view (active edit point), create an edit point (but don't open a new tab, just create the highlight zone on the track), temporarily "maximise" the drop-down view to the full window size (i.e. show the equivalent tab view in the current window).
* some of the "matrix"-type view methods commonly used in spreadsheets, like locking horizontal and vertical positions (above OR below, left OR right of a marker) for scrolling; this can also be used for determining scroll limits.
* the monitor view could include a toggle between showing the raw original track, the zoomed and otherwise camera/projector-rendered view, or the full rendering including effects. This ties in with the idea of being able to link a monitor to any point in the node system, but can be switched swiftly within the monitor view by icons mounted somewhere on each monitor's perimeter.
* the track view itself could be considered a subview of a total-timeline track view, or some other method of "mapping out" the full project (more than one way of mapping it out may be offered as optional/default views).
* this set of features is going to be very exciting and convenient to work with - a sort of Google Earth feature for global-sized projects.
Tree 2008-12-19 22:58:30
Back to link:Lumiera/DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Parked_
*Date* _2008-03-05_
*Proposed by* link:ct[]
-------------------------------------
Todo Lists
----------
We need some way to organize tasks to be done (tiddlywiki, testsuite, ...?)
Description
~~~~~~~~~~~
Tasks
^^^^^
Pros
^^^^
Cons
^^^^
Alternatives
^^^^^^^^^^^^
Rationale
^^^^^^^^^
Comments
--------
We decided to use a Tiddlywiki for now until this is further worked out
-- link:ct[] [[DateTime(2008-03-08T03:38:50Z)]]
Back to link:Lumiera/DesignProcess[]

Design Process : Unit Tests Python
==================================
[grid="all"]
`------------`-----------------------
*State* _Dropped_
*Date* _2007-06-17_
*Proposed by* link:Ichthyostega[]
-------------------------------------
UnitTests in Python
-------------------
Use the Python scripting language for the actual Unit Tests and access the Cinelerra Code via SWIG
Description
~~~~~~~~~~~
Define test classes in Python, using e.g. the link:PyUnit[] framework of the Python standard library. The SWIG compiler can generate wrapper code automatically, so we can access the C++ classes and facilities of Cinelerra as Python modules and classes. The classes to be tested in Cinelerra need to provide some interface for carrying out these tests (and this is one of the main benefits of the whole test-driven approach).
Tasks
~~~~~
* Find out how the SWIG generated wrappers play together with Python's List and Map types. Without the ability to use the latter in the tests, this whole proposal is rather pointless.
* Think on how we can test video data processing (at least in its basics, e.g. does additive overlay work)
Pros
~~~~
Programming unit and self tests in a scripting language facilitates this task.
The cross-language bindings are quite usable today. As a side effect, it helps to get a clean program structure, because the tests need some interface and/or some object factories to create the test
candidates. Python is proposed because it is fairly mainstream, has a flat learning curve, and is moderately modern and functional-style at the same time.
Cons
~~~~
* Adds to the complexity
* Some old-style hackers have a quite distinct aversion against Python
Alternatives
~~~~~~~~~~~~
Rationale
~~~~~~~~~
Why am I proposing this? Out of laziness. Python is there, many devs (on Linux) have some Python skills, and SWIG is not overly complicated to use.
And last but not least: just to get the discussion going... ;-)
Comments
--------
* I'd rather consider using some embedded language in Cinelerra which we can use to drive tests; it should be something smaller and saner than Python. Certainly needs more discussion. For simple unit tests, some C/C++ harness and a bit of shell scripting would suffice; I really want to integrate this with link:NoBug[].
-- link:ct[] [[DateTime(2007-06-17T17:32:27Z)]]
''''
Back to link:../DesignProcess[]

[grid="all"]
`------------`-----------------------
*State* _Idea_
*Date* _2008-10-31_
*Proposed by* link:Ichthyostega[]
-------------------------------------
Use Case analysis
-----------------
The only way to defeat "featuritis" is to build upon a coherent design --
+
which in turn relies upon a more or less explicit understanding of what the application should be like, and of the way the prospective user is expected to work with the program. Today, a generally accepted ''method'' for building up such an understanding is a ''use case analysis.'' Such a formal analysis would require identifying all usage scenarios with the involved actors and parts of the system, and then refining them in detail and breaking them down into distinct use cases. Here, I'll try a rather informal variant of such an analysis, restricting myself to describing the most important usage situations.
''Please participate in the discussion. It may well be that everything detailed here is self-evident, but I doubt it. At least the grouping and the omissions reflect sort of a focus of the project.''
Describing basic Lumiera usage situations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The fundamental assumption is that the user works on a project, which is reflected in the fact that the user is working on a single session over an extended period of time (several hours to several years). External media will be imported and incorporated into this session, additional media will be created within this session, and finally there is at least one render or export procedure to harvest the results of this work.
Scenario (1) : Exploring Media
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Various external media files are opened. You play, cue and examine the media. Tagging, labeling and adding notes. Marking of interesting points and ranges. Possibly breaking down into clips, or at least extract some ranges as clips. Draft arranging the clips, applying some effects to check the result and thus to find out about the viability of the footage. Playback of several media at the same time (several videos, but also video and music). Grouping of assets (media, clips, effects, markers) into folders.
Scenario (2) : Simple assembly
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You build up a simple linear cut sequence. Either by
- using a single source media, trimming it and cutting away (a small number of) unwanted parts
- playing source media and spilling over (insert, overwrite) some parts into the final assembly
- dragging over pre-organised clips from clip folders to build up the assembly.
Sound is either used immediately as-is (the soundtrack attached to the media), or there is a similarly simple, linear music bed. Some people prefer to switch sound off entirely for this kind of work. In any case, the link is either automatic, or rather vague and soft (as music being vaguely correlated)
Scenario (3) : Augmenting an assembly
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Without the intention to rework it from scratch, an already existing simple assembly is augmented, beautified and polished, maybe to conform with professional standards. This includes the "rescue" of a somewhat questionable assembly by repairing localized technical problems, but also shortening and re-arranging, and in extreme cases even changing the narrative structure.
A distinctive property of this usage scenario is that work happens rather in the context of ''tasks'' (passes) -- not so much isolated operations:
- the task may be to get the rhythm or overall tempo right, and thus you go over the sequence and do trim, roll, shuffle or slide edits.
- you may want to "fold-out" parts of the sound, thus interweaving o-sound and music
- there may be a sound overdubbing and replacing pass
- you may want to walk certain automation curves and adjust levels (sound volume or tone, fade, brightness/contrast/colour)
- general polishing may include adding title overlays, fading in and out, adding (typically a single type of) transition(s) in a coherent manner
Scenario (4) : Compositional work
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here I define ''compositional work'' as a situation where you deal with multiple more or less independent sequences going on in parallel, similar to a musical score. Frequently we encounter compositional parts embedded in an otherwise linear work, and often those parts evolve when Scenario (3) is driven to the extreme.
- the most common situation is that o-sound, sound design and music work together with the temporal structure created in the image edits.
- a movie with a complex narrative structure may induce compositional work on a very large scale (and existing applications frequently fall short on supporting such)
- compositing often leads to compositional work. Special FX, masked objects being arranged, artificial elements to be integrated.
- similarly, any collage-like or heavily layered arrangements lend themselves to compositional work.
The common distinctive property of all those situations is: objects are embedded into a primary context and have to obey the rules of this context, and at the same time have a close correlation to other objects which are embedded in a completely different ("orthogonal") context. (To give a catchy example: assume, a CG monster has to be integrated. Besides the masked monster object, you have several colouring and blurring layers at completely different levels in the layering order, and at the same time you have correlated sound objects, which need to be integrated into the general soundscape. And now your primary job is to get the movement and timings of the monster right in relation to the primary timing grid established by the existing edit)
The working style, and thus the tool support necessary for compositional work, is completely different from Scenario (3). After an initial buildup (which often is very systematic), the working profile can be characterized by tweaks to various parameters done in sync at widely separated sites within the session, together with repeated cycles of "do it", "assess the result", "undo all and do some small detail differently". Typically there is the need for much navigation (contrast this with Scenario (3), where you work in "passes").
Scenario (5) : Working with Sound
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The degree of integration of sound work is worth discussing. Often, due to limitations in existing tools, sound work is to a large extent done in separate applications, which in turn forces the whole production into a sequential organisation scheme: first the edit has to be roughly final, and then the sound people can step in (of course this is a simplification). To list the common operations:
- cleaning and preparing original sound
- fitting sound library elements or separately produced sound
- overdubbing
- playing or building music to match the rhythm of the edit or the original footage
- montage of dialogue and/or noise correlated to the primary content of the sequence
- sound design, shaping the pace and the feel of a sequence
- final balance mix
While some of those tasks are clearly always better done within a dedicated application, the ability to carry out this work partially within the main session, even while the basic edit is still in flux, may open new artistic possibilities.
Scenario (6) : Large Projects
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At first sight, the operations and the work to be done in large projects are the same as in small ones. But large projects tend to create a sort of additional "layer" on top of the usage scenarios described thus far, which will "kick in" at various places.
- work may be divided upon several editors, working on separate parts (sequences) which then need to be re-integrated
- there may be a global asset organisation (naming scheme), which will be extended locally, resulting in nested naming scopes.
- some quite basic stuff needs to be done in a coherent fashion, e.g. titles, a certain transition (template), the way fade-outs are done, a certain colour profile. Possibly, this stuff needs to be adjusted all over the project.
- there will be a general (large scale) timing grid and probably there is the need to navigate to the different parts of the whole project.
- there may be the necessity to build several versions of the same project in parallel (e.g. a short version and an extended director's cut)
- you may have to care for such nasty and tedious things as keeping sub-titles in-sync while the edit is still in flux
- you may want to do integration builds, where you add placeholders just for the purpose to get an impression of the work as a whole.
Scenario (7) : Teamwork
^^^^^^^^^^^^^^^^^^^^^^^
Several people work on a project.
- A longer sequence might be split up into parts, each edited by a different person. The parts will be collected and assembled by the chief editor. Edits to the parts will still be possible, but a system of permissions allows locking down access to the material.
- Arrangements based on the same resources can be branched, tagged and merged.
- Edits are logged with usernames
- Markers can be shown/hidden on a per-creator basis.
- Team members need ways to share and store notes and suggestions for each other's work. Annotations can be added to clips, markers or arrangements
- A pen tool could allow to scribble on top of frames or arrangements. An expressive and fast way to leave suggestions about deletions, movements and all other kinds of edits.
Scenario (8) : Script driven
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The application is started ''headless'' (without GUI) and controlled via an API. Either an existing session is loaded, or a new session is
created and populated. Then some operations have to be done in a systematic manner, requiring a way to address parts of the session both unambiguously and in a way that is easy to access and control from a programming environment (you can't just ''see'' the right clip; it needs to be tagged). Finally, there might be an export or render step. A variation of this scenario is the automatic extraction of some information from an existing project.
Discussion
~~~~~~~~~~
.Pros
* describing such scenarios, even if hypothetical, creates an anchor or point of reference for the feature/GUI design work to be done in detail
* relating features to working situations helps to see what is really important and what is rather of technical merit
* compiling and discussing this list helps shaping the character of the application as a whole
* the above compilation relates individual features to a general production process.
* the goal of this compilation is to be ''fairly complete''
.Cons
* any of those descriptions is artificial
* sometimes it is better to develop an application technology driven, especially when it is technologically challenging to get it to work properly.
* having such a large-scale vision may frighten away people who otherwise might jump in and implement some crazy but valuable new feature
* the listed usage scenarios intend to be ''fairly complete,'' which can be a limitation or even self-deception. Better to have an open-ended list.
* the above compilation seems quite conventional and explicitly leaves out some scenarios
- networked, distributed scenarios, compound applications
- television, life video, !VeeJay-ing
- cartoons, animations, game design
.Alternatives
* avoiding a general plan, just sharing a vague general vision
* just start out with one scenario directly at hand (e.g. the simple assembly) and not worrying about the rest
* rather than defining those scenarios (which are necessarily hypothetical), stick to the operation level; e.g. a use case would rather be "trim a clip"
* doing a complete state-of-the-art UML use case analysis.
* after having created the foundation, rather stick to an XP approach, i.e. implement, integrate and release small "usage stories"
Rationale
^^^^^^^^^
Well, after having considered, compiled and written such a concept, altogether avoiding a big-picture view of the application is no longer an option. At the other extreme, we have neither the resources nor the circumstances for a rigid and formal analysis. Finally, the XP approach really sounds promising, and it should be clear that it is in no way ruled out. Nothing hinders us from having a detailed vision, and then implementing small usage stories which fit into this vision.
Besides, another consideration: the above compilation builds upon the notion that there is a common denominator of film-making craft, a core editing art, which has been shaped in the first 100 years of cinema and which won't go away within the next generation, even if the technological and practical circumstances of production change quite dramatically.
Comments
--------
Template e.g. for regular TV series
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Constraints to fit all contents within a fixed timeline, cover the topic, select a collage of iconic scenes from archived and collected footage.
Update intro and credit roll for each episode.
Add in stopmotion, and 3D model animations with vocal commentaries.
Gather together separate items from "outworkers".
Tree
Back to link:Lumiera/DesignProcess[]