2011-05-11 Lumiera Developers Meeting
=====================================
:Author: Ichthyostega
:Date: 2012-05-15
May 11, 2011 on #lumiera 20:00 - 23:23 UTC +
__Participants__
* cehteh
* ichthyo
* daylife
_Transcript prepared by Ichthyo_
Render Engine Interface
-----------------------
_Ichthyo_ had prepared a new
link:/documentation/devel/rfc_pending/EngineInterfaceSpec.html[RfC]
about the actual interface between the
link:/documentation/design/architecture/playRender.html[Player subsystem]
and the Renderengine/Scheduler.
The following discussion turned to several detailed and technical
topics regarding the Jobs to be scheduled: how to create and attach new jobs,
how to handle memory, and more:
- Engine Interface ⟺ Scheduler Interface
- Latency of GUI reaction
- Aborting of Jobs, rescheduling, Job dependencies
- Resource handling, Callbacks after Modifications
- one central service ⟺ integrating with external libs/services
- explicit wait state on Jobs?
- Interfaces and components working together
- Responsibilities: who cares for what part
- Dedicated Timer facility
- animating the GUI during playback
- interaction between proc and backend
- handling of prerequisite resources
Conclusion
~~~~~~~~~~
_Ichthyo_ will take responsibility for that interface and for translating
the calls into actual Jobs to be handed to the Scheduler. The Scheduler interface
itself will remain rather low-level and straightforward. The _callback_ facilities
described in the aforementioned RfC need to be implemented explicitly on top of this
Scheduler interface. Ichthyo will take care of that, as well as of the integration
with the Fixture data structure.
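As a purely hypothetical sketch of this division of responsibilities (all names are invented; the actual interfaces were still to be designed at the time), the adaptation layer could translate a high-level player request into individual frame Jobs handed to a deliberately simple Scheduler interface:

```c
#include <stddef.h>

/* Low-level Scheduler interface: it merely accepts Jobs (all names invented). */
typedef struct Job {
    long frame_nr;                /* frame to calculate             */
    long deadline_us;             /* when the result must be ready  */
    void (*run)(struct Job *);    /* the actual working function    */
} Job;

enum { MAX_JOBS = 64 };
static Job    queue[MAX_JOBS];
static size_t queued = 0;

static int scheduler_put(Job job)        /* hand one Job to the Scheduler */
{
    if (queued >= MAX_JOBS)
        return -1;
    queue[queued++] = job;
    return 0;
}

static void render_frame(Job *job) { (void) job; /* would invoke the render nodes */ }

/* Adaptation layer: translate a high-level "play" request coming from the
 * Player into individual frame Jobs handed to the Scheduler.               */
static size_t play(long start_frame, long frame_cnt, long frame_duration_us)
{
    size_t created = 0;
    for (long i = 0; i < frame_cnt; ++i) {
        Job job = { start_frame + i, (i + 1) * frame_duration_us, render_frame };
        if (scheduler_put(job) == 0)
            ++created;
    }
    return created;
}
```

The point of the sketch is only the shape: the Scheduler stays "shoot and forget", while all translation and bookkeeping lives above it.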
Next meeting
------------
The next meeting will take place as usual, on Wednesday, June 8, at 20:00 UTC
''''
[[irctranscript]]
IRC Transcript
--------------
- xref:scheduler[Engine and Scheduler interface]
- xref:latency[Latency of GUI reaction]
- xref:jobproperties[Aborting of Jobs, rescheduling, Job dependencies]
- xref:resourcecleanup[Resource handling, Callbacks after Modifications]
- xref:externalrequirements[Requirements for integration with external libs/services]
- xref:waitstate[explicit wait state on Jobs?]
- xref:architecture[Interfaces and components working together]
- xref:responsibilities[Responsibilities: who cares for what part]
- xref:timer[Dedicated Timer facility]
- xref:guianimation[Animating the GUI during playback]
- xref:collaborations[Interaction between proc and backend]
- xref:prerequisites[Handling of prerequisite resources]
[[scheduler]]
.-- Engine Interface ⟺ Scheduler Interface --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-11 22:08:54] ichthyo: how do we want to proceed with the engine interface,
discuss it point by point, or only an overview and edit comments in the rfc?
[2011-05-11 22:09:08] well I thought rather the latter...its a rough draft
[2011-05-11 22:10:52] well generally i agree with your proposal (i'm just reading it)
with some refinements/ideas i want to add
[2011-05-11 22:12:48] well, probably the most important new view angle is that
I looked on all those 'play mode changes'
[2011-05-11 22:12:51] QoS: do we want constants there, or could you think about a 'functor' which reacts more dynamically?
[2011-05-11 22:13:20] well your proposal sounded like constants (to be defined)
[2011-05-11 22:13:35] yes, its constants, but it seems natural to extend that
[2011-05-11 22:13:47] i think about a functor or even a trait-class where you can query things
[2011-05-11 22:14:19] this class can then react more dynamically
[2011-05-11 22:14:22] ok, yes, so that you could e.g. combine several independent degrees of freedom
[2011-05-11 22:14:31] yes
[2011-05-11 22:16:02] then we can ask precisely "shall we render this frame now?"...
instead generally "do we render frames under timing pressure?"
[2011-05-11 22:16:47] and of course also other questions, accuracy, synchronization and so on
(...)
[2011-05-11 22:18:17] the other thing is about sequences --
the backend delivers only single frames, you set up relations
(this frame comes after that frame)
[2011-05-11 22:18:44] i would like if we can put prefetching and driving sequences
into the player/scheduler and not in the backend at all
[2011-05-11 22:18:56] yeah... I figured that there is some discrepancy
[2011-05-11 22:19:33] well for me there is not much difference,
i am already optimizing for sequences of course but
there is no need to pass that explicitly
[2011-05-11 22:19:33] so somehow we'd find a way to come from such sequences down to
the individual frames which can be delivered
[2011-05-11 22:20:22] yet there is one nasty problem
[2011-05-11 22:20:42] all those changes I'm writing about here are more or less connected to such a sequence as a whole
[2011-05-11 22:22:02] a 'job' .. or more precisely a 'frame-render-job' can have some state,
like references to related (following) jobs or so on
[2011-05-11 22:22:15] i am not sure yet if we want to put that there or in the player
[2011-05-11 22:23:04] maybe even the player is better but that makes the player a little more complex
as it doesn't just need to shoot and forget, but also care for a lot of
(potentially aborted) jobs
----------------------------
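The QoS "functor" idea floated above could be sketched roughly as follows. This is not the actual Lumiera interface, just an illustration under invented names: a small policy object the engine queries per frame ("shall we render this frame now?") instead of a fixed QoS constant.

```c
#include <stdbool.h>

/* A QoS policy object the engine can query per frame, instead of a fixed
 * constant: "shall we render this frame now?" (all names are invented).  */
typedef struct QoSPolicy {
    bool (*render_now)(const struct QoSPolicy *, long frame_nr, long lateness_us);
    long max_lateness_us;    /* example parameter: tolerated lateness */
} QoSPolicy;

/* Strict timing: skip any frame that is already too late. */
static bool strict_render_now(const QoSPolicy *p, long frame_nr, long lateness_us)
{
    (void) frame_nr;
    return lateness_us <= p->max_lateness_us;
}

/* Best effort (e.g. final render): calculate every frame, however late. */
static bool besteffort_render_now(const QoSPolicy *p, long frame_nr, long lateness_us)
{
    (void) p; (void) frame_nr; (void) lateness_us;
    return true;
}
```

Combining several independent degrees of freedom (accuracy, synchronisation, timing pressure) would then amount to composing such policy objects rather than multiplying out constants.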
[[latency]]
.-- Latency of GUI reaction --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-11 22:22:49] e.g think about the following scenario:
[2011-05-11 22:23:06] we're in the middle of playing, and now we want to switch to double speed playback
[2011-05-11 22:23:39] yes then the player cancels every other frame job and changes its timebase
[2011-05-11 22:23:56] or instead canceling it could put them on a lower priority
[2011-05-11 22:24:01] that means, the player/scheduler/whatever needs to find out a point in time,
*when* this switch can be performed
[2011-05-11 22:24:13] and then cancel all existing jobs after that point and reschedule new jobs
[2011-05-11 22:24:40] so that means the player might get considerably more heavy with all this bookkeeping
[2011-05-11 22:24:40] note I thought it helps if we don't guarantee immediate reaction
[2011-05-11 22:25:00] for the user it doesn't count if the speedup happens 100ms later
[2011-05-11 22:25:13] 100ms yes ... thats noticeable
[2011-05-11 22:25:22] ok, 50ms or so
[2011-05-11 22:25:45] 30 ms is noticeable
[2011-05-11 22:26:41] there should be a fast-path for time and priority rescheduling
[2011-05-11 22:27:33] fast path == explicit interface for only that purpose
[2011-05-11 22:27:36] I'm not sure what you're discussing exactly, but one of the things that bothers me most
about the Lightworks beta is the noticeable lag when navigating the timeline,
playing, etc, in contrast to Avid, which immediately, no delay whatsoever,
responds to any user action
[2011-05-11 22:29:01] user actions should give *instant* feedback --
even if this feedback is only fake feedback in corner cases
[2011-05-11 22:29:21] * ichthyo nods
[2011-05-11 22:29:22] editing is on that part a bit like playing piano. if there were any delay,
no matter how short, between pressing a key and hearing a note, it would be very irritating
[2011-05-11 22:29:32] exactly
[2011-05-11 22:29:39] Indeed
----------------------------
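The "fast path" mentioned above (an explicit interface used only for shifting time and priority of already scheduled jobs) might look like this minimal sketch; the names and the state model are assumptions, not the real scheduler API:

```c
typedef enum { JOB_SCHEDULED, JOB_RUNNING, JOB_DONE } JobState;

typedef struct {
    JobState state;
    long     start_us;
    int      priority;
} SchedEntry;

/* Fast path == an explicit interface for only that purpose: shift the time
 * and priority of an already scheduled job without full re-submission.
 * Returns 0 on success; running or finished jobs can no longer be moved.  */
static int fast_reschedule(SchedEntry *e, long new_start_us, int new_priority)
{
    if (e->state != JOB_SCHEDULED)
        return -1;
    e->start_us = new_start_us;
    e->priority = new_priority;
    return 0;
}
```

Keeping this path free of allocation and dependency analysis is what would let a speed change take effect well below the 30ms threshold discussed above.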
[[jobproperties]]
.-- Aborting of Jobs, rescheduling, Job dependencies --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-11 22:33:02] but caveat .. maybe we need to reschedule dependent jobs too
[2011-05-11 22:33:24] yeah... likely
[2011-05-11 22:33:38] so far you wanted to provide jobs in a conflictless nondeadlocking way
[2011-05-11 22:34:08] but for rescheduling that means likely that dependency information needs
to be passed and maintained in the scheduler as well
[2011-05-11 22:34:12] that doesn't need to be immediately the level within the scheduler
[2011-05-11 22:34:29] well it would be much easier if it is
[2011-05-11 22:35:09] well... thus the idea to allow for a (small) tolerance interval
[2011-05-11 22:35:14] then rescheduling automatically walks all dependencies, reschedules them too
and by that it can already see if we can meet timing constraints
[2011-05-11 22:36:37] the 'reschedule' can answer if rescheduling is possible
[2011-05-11 22:37:07] so you just iterate over all frame-jobs to be rescheduled, from shortest to longest time
[2011-05-11 22:37:36] the most urgent reschedules may return "hey, this job is already running, can't reschedule"
[2011-05-11 22:37:49] or even "already done, nothing to do"
[2011-05-11 22:38:03] to "sorry, can't do that, Dave"
[2011-05-11 22:38:15] to "ok rescheduling acknowledged"
[2011-05-11 22:38:32] any other returns possible? i think thats all
[2011-05-11 22:39:39] ok, but now, just by doing that walk, we've found out a time point when the change will happen
[2011-05-11 22:40:33] and that would be the information interesting for the layers above
[2011-05-11 22:40:56] e.g. to change the animation speed of the cursor, or whatever
[2011-05-11 22:41:40] if you want async rescheduling you make that a job by itself
[2011-05-11 22:42:41] so for the player it might reschedule the next 500ms worth of frames synchronously
and any later frames (background rendering) asynchronously
[2011-05-11 22:42:04] or more concrete... when we do an 'abort'
[2011-05-11 22:42:12] e.g. because the user changed the model
[2011-05-11 22:42:24] then I'm sure I need that asynchronous feedback
[2011-05-11 22:43:14] ...i.e. that other part doesn't have a timer or scheduler on its own,
but rather relies on that callback
[2011-05-11 22:43:19] we never abort 'running' jobs .. only waiting or scheduled jobs
[2011-05-11 22:43:48] but we can (and need) to 'abort' whole calculation streams
----------------------------
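The possible answers of a 'reschedule' call enumerated in this exchange, and the walk over dependent jobs that discovers the switch point, could be sketched as follows (hypothetical names; the real scheduler would of course walk an actual dependency structure, not a pre-sorted array):

```c
#include <stddef.h>

/* Possible answers of a 'reschedule' call, as enumerated in the discussion. */
typedef enum {
    RESCHEDULE_OK,          /* "ok, rescheduling acknowledged"            */
    RESCHEDULE_RUNNING,     /* "this job is already running, can't move"  */
    RESCHEDULE_DONE,        /* "already done, nothing to do"              */
    RESCHEDULE_IMPOSSIBLE   /* "sorry, can't do that, Dave"               */
} RescheduleResult;

typedef struct RJob {
    int  running, done;
    long start_us;
} RJob;

static RescheduleResult reschedule(RJob *j, long new_start_us)
{
    if (j->done)    return RESCHEDULE_DONE;
    if (j->running) return RESCHEDULE_RUNNING;
    j->start_us = new_start_us;
    return RESCHEDULE_OK;
}

/* Walk jobs from shortest to longest time (here: already sorted) and find
 * the first frame where the change actually takes effect; that time point
 * is the information interesting for the layers above (e.g. the GUI).     */
static long find_switch_point(RJob *jobs, size_t n, long shift_us)
{
    for (size_t i = 0; i < n; ++i)
        if (reschedule(&jobs[i], jobs[i].start_us + shift_us) == RESCHEDULE_OK)
            return jobs[i].start_us;   /* new time of the first moved job */
    return -1;                         /* nothing could be rescheduled    */
}
```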
[[resourcecleanup]]
.-- Ressource handling, Callbacks after Modifications --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-11 22:45:27] do you really need that feedback?
[2011-05-11 22:45:47] how about 'abort' gives instant feedback ..
[2011-05-11 22:45:53] or even 'never fails'
[2011-05-11 22:46:09] just abort a job and consider it dead from your point of view
[2011-05-11 22:46:14] note: the point is: in that other part triggering the 'abort'
[2011-05-11 22:46:23] there is no timer or scheduler running
[2011-05-11 22:46:44] so it doesn't help to know *when* the abort is executed
[2011-05-11 22:47:03] but I need to be notified *after* it has happened
[2011-05-11 22:47:09] jobs are self-contained, any resources they need are attached to the job,
they should not reference any volatile data up in proc
[2011-05-11 22:47:39] directly, yes of course -- but indirectly thats impossible
[2011-05-11 22:49:49] in that concrete example, quite simply, because the higher layers can do some stuff
only after the jobs are gone
[2011-05-11 22:49:57] e.g. close the session
[2011-05-11 22:50:03] really?
[2011-05-11 22:50:25] well... I can't just blow away a session which is still used by ongoing jobs
[2011-05-11 22:51:04] jobs have no backpointers to the session
[2011-05-11 22:51:05] of course I don't mutate the data which is referred by the jobs
[2011-05-11 22:52:09] nah not only reference data i am really thinking that jobs have absolutely
no references into the session
[2011-05-11 22:52:18] they have references into the backend data
[2011-05-11 22:52:23] but these are managed there
[2011-05-11 22:52:37] while closing a session doesnt require to shutdown the backend
[2011-05-11 22:53:01] well, so to clarify that: there *is* some kind of back reference for sure.
Just it isn't direct. See for example: a job references an exit node
[2011-05-11 22:53:35] and it references an output slot
[2011-05-11 22:53:38] really?
[2011-05-11 22:53:47] how about a functor there
[2011-05-11 22:53:50] both are things which are somehow allocated from the session
[2011-05-11 22:53:55] a job references a continuation job
[2011-05-11 22:54:16] which is an abstraction of the exit node
[2011-05-11 22:54:36] well anyways .. the callback *is* trivially possible
[2011-05-11 22:54:48] but do not expect that i want to do rescheduling for aborting jobs
[2011-05-11 22:55:30] well... as said: jobs can't be aborted
[2011-05-11 22:55:38] but whole calculation streams can
[2011-05-11 22:55:42] if you place a job to be executed in 1 hour and then abort this job,
this abort callback might be called not earlier than in one hour
(or if your really want we can add a time-limit there for rescheduling)
[2011-05-11 22:57:34] no! this callback job would not be scheduled to that time of the original job
(1 hour in the future) but to that point when the scheduler can guarantee that
none of the old jobs will become active anymore, because then I can dispose the data,
change connections, deregister output slots and so on.
Thus all I want for this respect is a guarantee that the working function of that job
can't get active anymore
[2011-05-11 23:00:41] do you see my point?
[2011-05-11 23:00:46] yes i see your point
[2011-05-11 23:01:33] it doesn't help me to know *now* that 3 frames into the future will still be calculated
and delivered, but starting with the 4th frame, the abort will be in effect
[2011-05-11 23:01:44] this information doesn't help me *now*
[2011-05-11 23:02:09] because, until these 3 frames are delivered, I can't deregister the "output slot"
[2011-05-11 23:02:17] e.g. disconnect an audio client from the Jack server
[2011-05-11 23:02:57] RAII :P .. garbage collection would be better in this case
[2011-05-11 23:03:07] not the slightest
[2011-05-11 23:04:10] deregistering an external connection has nothing to do with how the resources are tracked.
[2011-05-11 23:04:22] rather I need to know *when* its safe to do so
(...)
[2011-05-11 23:07:56] cehteh: actually I'd put an adaptation layer in between.
When viewed from the player (or the session, for that matter), I only want a high-level 'abort'
operation. Like I proposed in my RfC. I didn't expect to talk directly to the scheduler
or any part of the engine. So at the interface of the scheduler, that can be implemented
so that it fits in best there
[2011-05-11 23:08:18] well i am thinking in low level terms (as usual)
[2011-05-11 23:09:09] i write some low-level notes for the interface (nothing concrete)
----------------------------
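The kind of callback ichthyo asks for here (notification only *after* the scheduler can guarantee that none of the old jobs will become active anymore) can be sketched as a drain counter on a calculation stream. This is an illustration with invented names, not the agreed-upon interface:

```c
/* Aborting a whole calculation stream: the client above (player/session)
 * doesn't need to know *when* each job dies, only to be notified *after*
 * the scheduler guarantees that none of them can become active anymore;
 * only then is it safe to dispose data and deregister output slots.      */
typedef void (*DrainedCallback)(void *client_data);

typedef struct {
    int             live_jobs;   /* jobs not yet retired or discarded */
    DrainedCallback on_drained;  /* fires once live_jobs drops to 0   */
    void           *client_data;
} CalcStream;

static int  drained_count = 0;
static void count_drained(void *d) { (void) d; ++drained_count; }

/* Called by the scheduler whenever a job of this stream has finished,
 * was discarded before activation, or is otherwise guaranteed never
 * to run; the last retirement triggers the client's callback.         */
static void stream_job_retired(CalcStream *s)
{
    if (--s->live_jobs == 0 && s->on_drained)
        s->on_drained(s->client_data);
}
```

Note that this matches the constraint stated in the transcript: running jobs are never aborted, they simply retire, and the callback fires once the stream has drained.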
[[externalrequirements]]
.-- one central service ⟺ integrating with external libs/services --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-11 23:09:58] Just as a striking example
[2011-05-11 23:10:13] think about the occasional crashes with the dummy player currently in the GUI
[2011-05-11 23:10:24] that is exactly due to a lack of such a callback
[2011-05-11 23:10:26] yes while i was thinking we can decouple this completely, but maybe not
[2011-05-11 23:10:41] the GUI closes the window. The only thing I can do is to block that closing
[2011-05-11 23:10:57] but without external help, I don't know how long to block
[2011-05-11 23:11:33] and if we don't block, then the last frame gets disposed into an output buffer
which is already gone, deregistered from XV and whatsoever
[2011-05-11 23:11:45] not ever, but sometimes
[2011-05-11 23:12:18] well i was thinking that the scheduler and backend only operate on their own data
and you can just drop that on the floor from player and proc (and gui)
[2011-05-11 23:12:37] that means currently the player presents some buffers and the mockup player renders there
[2011-05-11 23:13:02] this renders into the gui-provided buffer, which the gui may release too early, right?
[2011-05-11 23:13:34] but if this buffer is provided by the backend and the gui just closes down, not displaying it
anymore then nothing happens, the backend knows how to manage resources
[2011-05-11 23:14:29] well... true, but a general mismatch
[2011-05-11 23:14:34] usually it works the other way round
[2011-05-11 23:14:49] I don't know how it's for ALSA, but also for jack it works the other way round
[2011-05-11 23:15:01] all those systems assume that you have a thread running just to serve them
[2011-05-11 23:15:06] yes
[2011-05-11 23:15:13] and require you to set up buffers to place things into
[2011-05-11 23:15:24] Jack is even more rigorous
[2011-05-11 23:15:32] yes i see now
[2011-05-11 23:15:54] so that's an important general mismatch; maybe we should come up with a nice solution
[2011-05-11 23:16:13] but well i still wonder if we may put such a callback even more into the
backend "resource released" ...
[2011-05-11 23:16:18] mhm maybe not
[2011-05-11 23:16:48] has pros and cons .. not all resources have jobs associated
[2011-05-11 23:16:16] of course it would be great, if the backend just delivers in time,
right into the buffer for output
[2011-05-11 23:16:39] so that we don't need another buffering and copy over step on the output side
[2011-05-11 23:17:06] but still in real the jack buffers will come from the backend
[2011-05-11 23:17:13] so the backend can send a notify
[2011-05-11 23:21:23] cehteh: AFAIK, the jack buffers come from the jack server
[2011-05-11 23:22:02] same for any gui widgets to display video
[2011-05-11 23:21:44] and not from the backend
[2011-05-11 23:21:49] ichthyo: the backend can still manage that
[2011-05-11 23:22:44] so why should the backend try to manage buffers,
which are actually managed by another (external) system??
[2011-05-11 23:22:47] generally i want it the other way around that the backend allocates/mmaps buffers,
but i know that there are a lot things which do it the other way around
[2011-05-11 23:23:02] that means we can do 2 things: copying into the destination buffer
[2011-05-11 23:23:11] or manage these destination buffers as well
[2011-05-11 23:24:08] or (3rd option): let the last calculation step work into those external destination buffers
[2011-05-11 23:24:35] thats case 2 .. when we manage them, then we calculate there with our normal toolchain
[2011-05-11 23:25:03] not quite; we use them, but we don't manage them
[2011-05-11 23:25:27] proc just asks for a buffer and doesnt need to care from where it comes
(RAM, Graphics card, audio server ...)
[2011-05-11 23:26:11] if we don't manage them but only 'use' them, then we get some more code paths
for every different kind of buffer
[2011-05-11 23:26:08] and thats not all
[2011-05-11 23:26:14] there are similar problems
[2011-05-11 23:26:26] e.g. jack doesn't run in our threading system
[2011-05-11 23:26:30] yes, rather shared memory
[2011-05-11 23:27:04] so another question is: can we even be so precise?
we don't have low-latency permissions, as the jack server has
so that means some additional buffering or copying anyway
[2011-05-11 23:27:31] we manage buffers provided by jack
[2011-05-11 23:27:49] if we cant render in time, we cant copy in time for sure
[2011-05-11 23:27:55] ah... if you call that 'managing' then yes
[2011-05-11 23:28:33] so we have different memory backends, first our own mmapped persistent file storage
[2011-05-11 23:28:46] then later maybe memory on the gpu
[2011-05-11 23:28:37] nah: 'managing' means owning, allocating, deallocating
[2011-05-11 23:29:12] well we ask jack for memory and we tell jack when we dont need it anymore
[2011-05-11 23:29:33] and i think i can do that better in the backend just like any other memory resource
[2011-05-11 23:29:40] but actually I don't see why we should
[2011-05-11 23:29:49] why can't we just cooperate with these services?
[2011-05-11 23:29:56] as everyone else does too?
[2011-05-11 23:30:01] i call that cooperation?
[2011-05-11 23:30:41] I don't see why the backend needs to pseudo-manage things which are just out of the scope,
and actually owned and managed by someone else?
[2011-05-11 23:30:49] the question is only if every single thing does this ad-hoc --
aka the gui cares for video buffers, the audio player cares for sound buffers,
proc cares for rendering buffers and so on
or we have one backend which cares about all buffers
the coding effort is almost the same
[2011-05-11 23:31:46] but we get one 'memory' service which can deliver different classes of memory
(and later easily adaptable/extendable to new technologies)
[2011-05-11 23:32:01] I'd rather doubt that. In all those cases, we're using already existing
libraries and solutions. and we're rather integrating with them
[2011-05-11 23:32:23] yes so there is no much effort doing so ..
[2011-05-11 23:32:54] that extends even more to the media libs we intend to use
[2011-05-11 23:32:55] but since the backend always knows what memory is in use it can do a clean shutdown
[2011-05-11 23:33:18] without all these callbacks and asynchronous messages
[2011-05-11 23:33:34] not really
[2011-05-11 23:33:49] if we use external services, we're bound to the terms of these
[2011-05-11 23:33:56] same for external libraries
[2011-05-11 23:34:14] yes
[2011-05-11 23:35:04] if we have a nice and blazingly fast mmapped file read and buffer handling for it, then fine
[2011-05-11 23:35:41] then I'd use that exactly for those services, which *we* write, provide and manage
[2011-05-11 23:36:08] possibly we just do it as you're saying, and we may implement it in the way i think later
(and then try out and compare) .. or stay with your way
[2011-05-11 23:37:29] maybe yes... probably I'd concentrate first on that core part which we provide,
because I'd guess it is the most important one
i.e. reading those source media files and the like
[2011-05-11 23:37:46] well i have some reasons to do it in my way --
for example if you connect the viewer somewhere in the graph,
then it would be nice if it can transparently use memory on the graphics card
instead of copying over. Or think about render nodes and network transparency later
[2011-05-11 23:38:48] but this is future and i think it is not a fundamental design decision; it can be
incrementally added when it makes sense and turns out to work well
[2011-05-11 23:39:07] first i am going for the mmaped file access and nothing else
[2011-05-11 23:39:07] yes, probably better approaching it that way
----------------------------
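The idea that "proc just asks for a buffer and doesn't need to care where it comes from (RAM, graphics card, audio server ...)" amounts to a buffer-provider abstraction. A minimal sketch, assuming invented names, with one heap-backed implementation standing in for Lumiera's own storage:

```c
#include <stdlib.h>

/* One provider interface, several implementations behind it: proc just
 * asks for a buffer, regardless of whether it lives in RAM, on the GPU,
 * or is owned by an external system such as a Jack server (sketch).     */
typedef struct BufferProvider {
    void *(*acquire)(struct BufferProvider *, size_t size);
    void  (*release)(struct BufferProvider *, void *buf);
} BufferProvider;

/* Plain heap-backed provider. */
static void *heap_acquire(BufferProvider *p, size_t size)
{
    (void) p;
    return malloc(size);
}

static void heap_release(BufferProvider *p, void *buf)
{
    (void) p;
    free(buf);
}

static BufferProvider heap_provider = { heap_acquire, heap_release };

/* A Jack- or GPU-backed provider would implement the same two functions,
 * but hand out buffers owned and managed by the external system, which
 * resolves the "manage vs. use" distinction debated in the transcript.  */
```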
[[waitstate]]
.-- explicit wait state on Jobs? --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-11 23:44:34] another question.
[2011-05-11 23:44:51] do our jobs have some mechanism to 'yield' or wait or similar?
[2011-05-11 23:45:10] i.e. go in a wait state (and make room for other jobs to be scheduled meanwhile)?
[2011-05-11 23:45:34] waiting on other jobs
[2011-05-11 23:45:49] i am thinking whether i make waiting generic, or only waiting on jobs
[2011-05-11 23:45:54] so you call into a function on the lumiera scheduler
[2011-05-11 23:45:55] i think only on jobs is good
[2011-05-11 23:46:06] which blocks and reschedules other jobs meanwhile?
[2011-05-11 23:46:23] a running job?
[2011-05-11 23:46:34] just brainstorming
[2011-05-11 23:46:45] both is thinkable
[2011-05-11 23:46:56] also it would be thinkable that we disallow any waits
[2011-05-11 23:47:03] nope i do not want that, rather split the jobs up into small pieces
[2011-05-11 23:47:13] and just say, then the job function has to terminate and another job has to pick up
[2011-05-11 23:47:27] yes thats possible
[2011-05-11 23:48:00] but when a job runs, it runs until it's done (which might be erroneous/an abort on its own)
[2011-05-11 23:48:20] after running a job wont become rescheduled
[2011-05-11 23:48:49] if it turns out we need a yield i could implement it .. but not prematurely
[2011-05-11 23:49:44] but likely for our purposes we don't even need that
[2011-05-11 23:50:16] yes and thats not domain of the scheduler
[2011-05-11 23:50:32] jobs can block anyways
[2011-05-11 23:50:40] in running state
[2011-05-11 23:50:53] no need to put them on a wait queue or whatever
[2011-05-11 23:51:31] but of course the goal is to write them such that they don't block
[2011-05-11 23:52:11] and the only thing worthwhile to wait for in a special wait state is some other job
[2011-05-11 23:52:38] the next question is .. do we release these waiting jobs by some signal or by a job termination?
[2011-05-11 23:53:14] the first can be more efficient but will be more complex
[2011-05-11 23:53:30] so at first i opt for the latter, there is prolly anyways something to do
[2011-05-11 23:54:02] how does this 'waiting' actually work?
[2011-05-11 23:54:16] does the jobfunction call into some API of the scheduler?
[2011-05-11 23:54:21] i make an explicit waiting container (llist); waiting jobs sit there and are not scheduled
[2011-05-11 23:54:47] (or maybe that uses a priority queue too, to detect expired jobs)
[2011-05-11 23:55:02] the scheduler wraps the jobfunction
[2011-05-11 23:55:10] ok
[2011-05-11 23:55:29] so that means it gets control only when the jobfunction exits
[2011-05-11 23:55:51] for simplicity and a first implementation yes i think that suffices
[2011-05-11 23:55:59] * ichthyo thinks the same
----------------------------
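The scheme agreed on above (the scheduler wraps the job function, regains control only when it returns, and releases waiting jobs by job termination rather than by signals) can be sketched like this, with invented names and a fixed-size waiter list for brevity:

```c
#include <stddef.h>

/* A job with an attached list of jobs waiting on it (sketch).          */
typedef struct WJob {
    void (*run)(struct WJob *);
    int  done;
    struct WJob *waiters[4];   /* jobs waiting on this one  */
    size_t n_waiters;
    int  runnable;             /* eligible for scheduling?  */
} WJob;

static void noop_run(WJob *j) { (void) j; }

/* The scheduler wraps the job function: it regains control only when
 * that function returns (jobs never yield, they run to completion),
 * and at that point it releases the jobs waiting on this one.         */
static void scheduler_invoke(WJob *job)
{
    job->run(job);
    job->done = 1;
    for (size_t i = 0; i < job->n_waiters; ++i)
        job->waiters[i]->runnable = 1;   /* release by job termination */
}
```

As discussed, a yield mechanism is deliberately absent: work that would need to wait mid-computation is instead split into smaller jobs chained through this waiting list.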
[[architecture]]
.-- Interfaces and components working together --
[caption="☉Transcript☉ "]
----------------------------
[2011-05-12 00:01:37] so now we talked about the scheduler interface --
which isnt really the engine interface --
BUT: the job structure needs to carry all resources involved around
[2011-05-12 00:02:34] think about the W: there are more interfaces involved
[2011-05-12 00:02:43] ok... that brings up another closely related question
[2011-05-12 00:02:59] the engine interface needs something like these calculation streams
[2011-05-12 00:03:19] that means that the individual job needs to carry a marker, so it can be associated
to a specific stream
[2011-05-12 00:02:59] first, asking the backend for a frame
[2011-05-12 00:03:58] i add at least one generic void* to the job where you can put any data along
[2011-05-12 00:04:12] type safety on the level above please :P
[2011-05-12 00:05:20] I'm fine with that
[2011-05-12 00:05:27] (caveat about the lifetime and ownership of this void*)
[2011-05-12 00:05:52] if the job owns it then we need also a cleanup hook there
(...)
[2011-05-12 00:10:44] next would be an interface for how to set up a job... possibly in multiple stages
[2011-05-12 00:11:59] job_new(); job_wait_for(another_job); job_data()=some_calculation_stream;
[2011-05-12 00:12:06] job_arm()
[2011-05-12 00:12:14] something like that
[2011-05-12 00:12:38] sounds ok
[2011-05-12 00:12:53] well .. now i think it becomes clear that 'jobs' are our
message passing interface between proc and backend
[2011-05-12 00:13:15] you dont ask the backend directly for a frame you schedule a job delivering a frame
[2011-05-12 00:13:29] the backend schedules a job rendering a frame if its not cached
[2011-05-12 00:13:35] yes, my thinking too
(...)
[2011-05-12 00:14:50] so 'job' is a baseclass .. but there can be highlevel subclasses
cache_fetch_job and render_frame_job
[2011-05-12 00:15:04] really?
[2011-05-12 00:15:11] i wonder -- makes sense or?
[2011-05-12 00:15:38] *could* be done, but somehow goes against that simplicity
[2011-05-12 00:15:53] I mean, each of these classes shares a lot: how aborting is handled and more
[2011-05-12 00:16:43] Ok on the low level you can always put a job together with all the little
details needed and then arm it. I expect that we have maybe half a dozen classes of jobs
[2011-05-12 00:17:03]