diff --git a/src/vault/mem/extent-family.hpp b/src/vault/mem/extent-family.hpp
index e8f779aa2..b2ccf7018 100644
--- a/src/vault/mem/extent-family.hpp
+++ b/src/vault/mem/extent-family.hpp
@@ -50,6 +50,9 @@
 //#include "lib/util.hpp"
 //#include
+#include <array>
+#include <memory>
+#include <vector>
 
 namespace vault{
@@ -64,14 +67,43 @@ namespace mem {
    * @todo WIP-WIP 7/2023
    * @see NA_test
    */
+  template<typename T, size_t siz>
   class ExtentFamily
     : util::NonCopyable
     {
+      using Storage = std::array<T,siz>;
+      
+      struct Extent
+        : std::unique_ptr<Storage>
+        {
+          /**
+           * @note default ctor immediately allocates the full storage,
+           *       but uses default initialisation rsp. no initialisation
+           *       in case the payload type T is a POD
+           */
+          Extent()
+            : std::unique_ptr<Storage>{new Storage}
+            { }
+        };
+      using Extents = std::vector<Extent>;
+      
+      Extents extents_;
+      
+      size_t start_,after_;
       
     public:
       explicit
-      ExtentFamily()
+      ExtentFamily (size_t initialCnt =0)
+        : extents_{initialCnt}
+        , start_{0}
+        , after_{initialCnt}
         { }
+      
+      void
+      reserve (size_t expectedMaxExtents)
+        {
+          extents_.reserve (expectedMaxExtents);
+        }
     };
diff --git a/tests/vault/mem/extent-family-test.cpp b/tests/vault/mem/extent-family-test.cpp
index e90202995..e9eea8e18 100644
--- a/tests/vault/mem/extent-family-test.cpp
+++ b/tests/vault/mem/extent-family-test.cpp
@@ -72,7 +72,7 @@ namespace test {
       void
       simpleUsage()
         {
-          ExtentFamily extents;
+          ExtentFamily extents{5};
         }
diff --git a/wiki/renderengine.html b/wiki/renderengine.html
index 36cf79869..f522c69b1 100644
--- a/wiki/renderengine.html
+++ b/wiki/renderengine.html
@@ -6183,7 +6183,7 @@ This is the core service provided by the player subsystem. The purpose is to cre
 :any details of this processing remain opaque for the clients; even the player subsystem just accesses the EngineFaçade
-
+
//Integration effort to promote the development of rendering, playback and video display in the GUI//
 This IntegrationSlice was started in {{red{2023}}} as [[Ticket #1221|https://issues.lumiera.org/ticket/1221]] to coordinate the completion and integration of various implementation facilities planned, drafted and built over the past years; this effort marks the return of the development focus to the lower layers (after years of focussed UI development) and will implement the asynchronous and time-bound rendering coordinated by the [[Scheduler]] in the [[Vault|Vault-Layer]]
 
@@ -6202,14 +6202,14 @@ __May.23__: taking a //prototyping approach// now, since further development was
 * ✔ augment the {{{DummyJob}}} to allow tracing Job invocations in tests
 * ✔ build a {{{MockJobTicket}}} on top, implemented as subclass of the actual JobTicket
 * ✔ build a {{{MockSegmentation}}} to hold onto ~JobTickets, which can be created as Mock
-* ✔define a simple specification language (based on the existing {{{GenNode}}}-DSL to define segments, tickets and prerequisite jobs
+* ✔ define a simple specification language (based on the existing {{{GenNode}}}-DSL) to define segments, tickets and prerequisite jobs
 * ✔ implement a »~Split-Splice« algorithm for → SegmentationChange, rigged accordingly to generate a mocked Segmentation for now
 * ✔ create a testbed to assemble a JobPlanningPipeline step by step (→ [[#920|https://issues.lumiera.org/ticket/920]] and [[#1275|https://issues.lumiera.org/ticket/1275|]])
 
 __June23__: building upon this prototyping approach, the dispatcher pipeline could be rearranged in the form of a pipeline builder, making it possible to retire the implementation scheme originally based on »Monads«. The implementation of the Dispatcher is complete, yet the build-up of the [[»Render Drive« #1301|https://issues.lumiera.org/ticket/1301]] could not reasonably be completed, due to the lack of a clearly shaped ''Scheduler interface''.
 
 __July23__: this leads to a shift of work focus towards implementing the [[Scheduler]] itself.
-The Scheduler will be structured into two Layers, where the lower layer is implemented as //priority queue// (using the STL). So the most tricky part to solve is the representation of //dependencies// between jobs, with the possible extension to handling IO operations asynchronously. Analysis and planning of the implementation indicate that the [[scheduler memory managment|SchedulerMemory]] can be based on //Extents//, which are interpreted as »Epochs« with a deadline. These considerations imply the next steps for building up the Scheduler functionality
+The Scheduler will be structured into two Layers, where the lower layer is implemented as //priority queue// (using the STL). Thus the trickiest part to solve is the representation of //dependencies// between jobs, with a possible extension to handling IO operations asynchronously. Analysis and planning of the implementation indicate that the [[scheduler memory management|SchedulerMemory]] can be based on //Extents//, which are interpreted as »Epochs« with a deadline. These considerations imply what steps to take next for building up Scheduler functionality and the memory management required for processing a simple job
 * 🗘 build a first working draft for the {{{BlockFlow}}} allocation scheme [[#1311|https://issues.lumiera.org/ticket/1311]]
 * ⌛ define and cover the basic [[Activities|RenderActivity]] necessary to implement a plain-simple-Job (without dependencies)
 * ⌛ pass such an Activity through the two layers of the Scheduler
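The steps above revolve around planting Activities into deadline-bounded »Epochs«. A minimal sketch of that idea follows -- the names ({{{Activity}}}, {{{Epoch}}}, {{{plant}}}) and the layout are assumptions for illustration, not the actual Lumiera implementation:

```cpp
#include <array>
#include <cstddef>
#include <cassert>

// Hypothetical sketch: an Epoch owns a fixed-size extent of Activity slots
// plus a deadline; a new Activity is planted into a free slot and afterwards
// sits at a fixed memory location until the whole Epoch is discarded.
struct Activity { long startTime; };

template<std::size_t siz>
struct Epoch
  {
    long deadline;                      // all Activities here are obsolete past this point
    std::array<Activity,siz> storage;   // fixed memory locations for Activity records
    std::size_t used = 0;
    
    Activity*
    plant (Activity activity)
      {
        assert (used < siz);            // a real implementation would overflow into the next Epoch
        storage[used] = activity;
        return &storage[used++];        // pointer remains valid for the Epoch's lifetime
      }
  };
```

Since only pointers into the fixed storage are handed out, dependency links between Activities can be plain pointers, as the surrounding text requires.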
@@ -7054,10 +7054,10 @@ Later on we expect a distinct __query subsystem__ to emerge, presumably embeddin
 
 → QuantiserImpl
-
+
 //Invoke and control the dependency- and time-based execution of [[render jobs|RenderJob]]//
 The Scheduler acts as the central hub in the implementation of the RenderEngine and coordinates the //processing resources// of the application. Regarding architecture, the Scheduler is located in the Vault-Layer and //running// the Scheduler is equivalent to activating the »Vault Subsystem«. An EngineFaçade acts as entrance point, providing high-level render services to other parts of the application: [[render jobs|RenderJob]] can be activated under various timing and dependency constraints. Internally, the implementation is segregated into two layers
-;Layer-2: Control
+;Layer-2: Coordination
 :maintains a network of interconnected [[activities|RenderActivity]], tracks dependencies and observes timing constraints
 ;Layer-1: Invocation
 :operates a low-level priority scheduling mechanism for time-bound execution of [[activities|RenderActivity]]
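Layer-1's »low-level priority scheduling mechanism« maps naturally onto the STL priority queue mentioned above. A sketch of that shape -- {{{ActivityRef}}} and the field names are assumptions, only the use of {{{std::priority_queue}}} is taken from the text:

```cpp
#include <queue>
#include <vector>

// Sketch: entries are ordered by their scheduled start time, so the
// earliest-due activity is always found at the head of the queue.
struct ActivityRef
  {
    long when;   // scheduled start time
    int  id;     // stand-in for a reference to the Activity record
  };

struct LaterFirst  // inverted comparison: smallest 'when' ends up on top
  {
    bool
    operator() (ActivityRef const& l, ActivityRef const& r)  const
      { return l.when > r.when; }
  };

using PriorityQueue = std::priority_queue<ActivityRef, std::vector<ActivityRef>, LaterFirst>;
```

Note the inverted comparator: {{{std::priority_queue}}} is a max-heap by default, so ordering by "later is less" yields the time-bound "earliest deadline at the top" behaviour.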
@@ -7073,25 +7073,29 @@ The time-based ordering and prioritisation of [[render activities|RenderActivity
 !Usage pattern
 The [[Language of render activities|RenderActivity]] forms the interface to the scheduler -- new activities are defined as //terms// and handed over to the scheduler. This happens as part of the ongoing job planning activities -- and thus will be performed //from within jobs managed by the scheduler.// Hence access to the scheduler happens almost entirely from within the scheduler's realm itself, and is governed by the usage scheme of the [[Workers|SchedulerWorker]].
 
-These ''Worker Threads'' will perform actual render activities most of the time (or be idle). However -- idle workers contend for new work, and for doing so, they //also perform the internal scheduler management activities.// As a consequence, all Scheduler coordination and [[memory management|SchedulerMemory]] is ''performed non-concurrent'': only a single Worker can acquire the {{{GroomingToken}}} and will then perform managment work until the next render activity is encountered.
+These ''Worker Threads'' will perform actual render activities most of the time (or be idle). However -- idle workers contend for new work, and for doing so, they //also perform the internal scheduler management activities.// As a consequence, all Scheduler coordination and [[memory management|SchedulerMemory]] is ''performed non-concurrently'': only a single Worker can acquire the {{{GroomingToken}}} and will then perform management work until the next render activity is encountered at the head of the //priority queue.//
+
+→ [[Activity|RenderActivity]]
+→ [[Memory|SchedulerMemory]]
+→ [[Workers|SchedulerWorker]]
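The single-holder semantics of the {{{GroomingToken}}} described above can be sketched with one atomic flag -- a hedged illustration of the concept only; the actual implementation and its memory-order choices may differ:

```cpp
#include <atomic>

// Sketch: at most one worker can hold the token; only the holder may
// perform scheduler management work. test_and_set returns the previous
// value, so acquisition succeeds exactly when the flag was clear before.
class GroomingToken
  {
    std::atomic_flag held_ = ATOMIC_FLAG_INIT;
    
  public:
    bool
    tryAcquire()
      { return not held_.test_and_set (std::memory_order_acquire); }
    
    void
    release()
      { held_.clear (std::memory_order_release); }
  };
```

The acquire/release pairing ensures that management work done under the token is visible to the next worker that acquires it -- which is what makes the non-concurrent coordination scheme safe without further barriers.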
 
-
-
//The Scheduler uses an »Extent« based memory management scheme known as ''BlockFlow''.//
-The organisation of rendering happens in terms of [[Activities|RenderActivity]], which may bound by //dependencies// and limited by //deadlines.// For the operational point of view this implies that a sequence of allocations must be maintained to „flow through the Scheduler“ -- in fact, only references to these {{{Activity}}} records are passed, while the actual descriptors reside at fixed memory locations. This is essential to model dependencies and conditional execution structures efficiently. At some point however, any {{{Activity}}} record will either be //performed// or //obsoleted// -- and this leads to the idea of managing the allocations in memory extents termed as »Epochs«
+
+
//The Scheduler uses an »Extent« based memory management scheme known as {{{BlockFlow}}}.//
+The organisation of rendering happens in terms of [[Activities|RenderActivity]], which may be bound by //dependencies// and limited by //deadlines.// From the operational point of view this implies that a sequence of allocations must be able to „flow through the Scheduler“ -- in fact, only references to these {{{Activity}}}-records are passed, while the actual descriptors reside at fixed memory locations. This is essential to model the dependencies and conditional execution structures efficiently. At some point however, any {{{Activity}}}-record will either be //performed// or //obsoleted// -- and this leads to the idea of managing the allocations in //extents// of memory, here termed »Epochs«
 * a new Activity is planted into a suitable //Epoch,// based on its deadline
-* it is guaranteed to sit at a fixed memory location while it potentially can be activated
-* based on the deadlines, at some point a complete strike of activities can be reasoned to be //obsolete.//
-* this allows to discard a complete Extent without any further checks and processing (trivial destructors!)
+* it is guaranteed to sit at a fixed memory location for as long as it can potentially be activated
+* based on the deadlines, at some point a whole batch of activities can be deduced to be //obsolete.//
+* this allows discarding a complete Extent //without any further checks and processing// (assuming trivial destructors!)
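The last point -- discarding a whole Extent in one deallocation -- only works when the payload needs no per-element destruction. A small sketch of how that precondition can be enforced at compile time (the {{{Extent}}} shape here is an illustrative assumption modelled on the header diff above):

```cpp
#include <type_traits>
#include <memory>
#include <array>
#include <cstddef>

// Sketch: when T is trivially destructible, resetting the owning pointer
// is a single deallocation -- no destructor calls per Activity record.
template<typename T, std::size_t siz>
struct Extent
  {
    static_assert (std::is_trivially_destructible<T>::value,
                   "whole-Extent discard relies on trivial destructors");
    
    std::unique_ptr<std::array<T,siz>> storage{new std::array<T,siz>};
    
    void
    discard()
      { storage.reset(); }   // frees the whole block, no element clean-up
  };
```

Attempting to instantiate such an Extent with a payload type holding e.g. a {{{std::string}}} member would fail to compile, turning a silent leak into a build error.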
 
 !Safeguards
-This is a rather fragile composition and chosen here for performance reasons; while activities are interconnected, there memory locations are adjacent, improving cache coherence. Moreover, most of the dependency processing and managing of activities happens single-threaded, while some [[worker|SchedulerWorker]] holds the {{{GroomingToken}}}; so most of the processing is local and does not require memory barriers.
+This is a rather fragile composition, chosen here for performance reasons: while activities are interconnected, their memory locations are adjacent, improving cache locality. Moreover, most of the dependency processing and managing of activities happens single-threaded, while some [[worker|SchedulerWorker]] holds the {{{GroomingToken}}}; so most of the processing is local and does not require memory barriers.
 
-Unfortunately this also implies that most safety barriers of the C++ language are removed or circumvented. A strict processing regime must be established, with clear rules as to when activities may be accessed.
+Unfortunately this tricky arrangement also implies that many safety barriers of the C++ language are circumvented. A strict processing regime must be established, with clear rules as to when activities may, or may no longer be accessed.
 * each »Epoch« gets an associated //deadline//
 * when the next [[job|RenderJob]] processed by a worker starts //after this Epoch's deadline//, the worker //has left the Epoch.//
-* when all workers have left an Epoch, only ''pending async IO tasks'' need to be considered, since such IO task can always be delayed for an extended period of time. For an IO task, buffers need to be prepared, and those buffers are indirectly tied to the job depending on them.
-* ⟹ thus a count of pending IO activities must be maintained //for each Epoch//  -- implemented by the same mechanism also employed for dependencies between render jobs, namely a notification leading to decreasing a local counter
+* when all workers have left an Epoch, only ''pending async IO tasks'' need to be considered, since such an IO task can be delayed for an extended period of time. For an IO task, buffers need to be kept available, and those buffers are indirectly tied to the job depending on them.
+* ⟹ thus a count of pending IO activities must be maintained //for each Epoch//  -- implemented by the same mechanism also employed for dependencies between render jobs, which is a notification message causing a local counter to be decremented.
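The per-Epoch bookkeeping from the list above can be sketched as a small gate object -- names and shape are assumptions; only the counter-decrementing notification mechanism is taken from the text:

```cpp
#include <atomic>

// Sketch: each Epoch counts its pending async IO activities; a notification
// decrements the counter, and the Epoch may only be discarded once all
// workers have left it AND the counter has dropped back to zero.
struct EpochGate
  {
    std::atomic<int> pendingIO{0};
    bool workersLeft = false;          // in reality derived from worker deadlines
    
    void startIO()  { pendingIO.fetch_add (1, std::memory_order_relaxed); }
    void notifyIO() { pendingIO.fetch_sub (1, std::memory_order_release); }
    
    bool
    canDiscard()  const
      { return workersLeft and pendingIO.load (std::memory_order_acquire) == 0; }
  };
```

Reusing the same notification-with-counter mechanism already needed for job dependencies keeps the Epoch lifecycle free of any extra synchronisation machinery.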
 
diff --git a/wiki/thinkPad.ichthyo.mm b/wiki/thinkPad.ichthyo.mm index 61e5b1f1b..709841ae9 100644 --- a/wiki/thinkPad.ichthyo.mm +++ b/wiki/thinkPad.ichthyo.mm @@ -78301,6 +78301,12 @@ Date:   Thu Apr 20 18:53:17 2023 +0200
@@ -78535,6 +78541,18 @@ Date:   Thu Apr 20 18:53:17 2023 +0200
+an Extent is an uninitialised block of memory
@@ -78548,12 +78566,90 @@ Date:   Thu Apr 20 18:53:17 2023 +0200
+Rationale:
+ • Performance...
+ • more dynamism is not needed at all, since the BlockFlow scheme can still adjust the duration of an Epoch
+that is: a default initialisation does take place, but for an object type this means value initialisation of the members. Only in case the payload type is a POD, no initialisation happens -- and I hope the optimiser picks up on that
@@ -79307,9 +79403,21 @@ Date:   Thu Apr 20 18:53:17 2023 +0200
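The distinction between default- and value-initialisation discussed in the notes above (and relied upon by the {{{Extent}}} ctor in the header diff) can be illustrated with a small sketch; {{{Pod}}} and the helper functions are purely hypothetical:

```cpp
// `new T` default-initialises -- for a POD that means NO initialisation,
// while `new T()` value-initialises, guaranteeing zeroed members.
// For class types with a user-provided constructor, both forms invoke it.
struct Pod { int val; };

Pod* makeDefault() { return new Pod; }    // val is indeterminate here
Pod* makeValue()   { return new Pod(); }  // val is guaranteed to be 0
```

Reading the indeterminate value from {{{makeDefault()}}} would be undefined behaviour, which is exactly why the Extent scheme may skip initialisation only for trivially-constructible payloads that are overwritten before use.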