remembered that some years ago I had to deal with a very similar problem
when planning the frame rendering jobs. It turned out that the
iterator monad developed for that purpose looks promising for our task at hand
...for the very specific situation where we want
to explore an existing data structure, and the
exploration assumes value semantics.
The workaround then is to use pointers as values.
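a minimal sketch of that workaround, assuming a hypothetical Node type and a hand-rolled depth-first walk (names are illustrative, not the actual iterator primitives): the exploration copies its yielded element, but the "value" is just a pointer into the existing structure

    #include <iostream>
    #include <vector>

    struct Node
      {
        int val;
        std::vector<Node> children;
      };

    std::vector<Node const*>
    collectDepthFirst (Node const& root)
    {
      std::vector<Node const*> result;
      std::vector<Node const*> stack{&root};
      while (!stack.empty())
        {
          Node const* n = stack.back();      // cheap copy of a pointer value,
          stack.pop_back();                  // instead of copying the subtree
          result.push_back (n);
          for (Node const& child : n->children)
            stack.push_back (&child);
        }
      return result;
    }

    int main()
    {
      Node tree{1, {{2, {}}, {3, {{4, {}}}}}};
      for (Node const* n : collectDepthFirst (tree))
        std::cout << n->val << " ";          // prints 1 3 4 2
    }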
...attempt to build it based on the monadic iterator primitives.
The only problem is: we need to find out the relation between nodes
after the fact. In the real usage situation, this
is not a problem, since we have a state object
there, which can track the relation as it is established,
especially when the exploration stack is pushed down
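roughly how such a state object might record the relation right at the moment the stack is pushed down; PlanningState and notePush are made-up names, purely for illustration

    #include <unordered_map>

    struct Node;   // any node type, used only through pointers

    class PlanningState
      {
        std::unordered_map<Node const*, Node const*> parentOf_;

      public:
        void
        notePush (Node const* parent, Node const* child)
          {
            parentOf_[child] = parent;   // relation recorded right when
          }                              // the exploration stack grows

        Node const*
        getParent (Node const* n)  const
          {
            auto pos = parentOf_.find (n);
            return pos == parentOf_.end()? nullptr : pos->second;
          }
      };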
first successful definition of all the JobPlanning classes:
just the framework of classes necessary to pass the compiler;
all implementation is still stubbed
brainstorming how to implement the job planning stage:
the idea is to build on top of the IterExplorer,
but have the "stack" of re-evaluation integrated
into a custom type, which exploits the static
node network structure to avoid heap allocations
solution idea: again use a builder function?
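a sketch combining both ideas, assuming the maximum evaluation depth can be bounded from the static node network: a fixed-capacity stack embedded in a custom type, set up through a builder function. Evaluation, buildEvaluation and maxDepth are assumptions, not actual code

    #include <array>
    #include <cassert>
    #include <cstddef>

    template<typename NOD, std::size_t maxDepth>
    class Evaluation
      {
        std::array<NOD const*, maxDepth> stack_{};
        std::size_t top_ = 0;

      public:
        void push (NOD const* n) { assert (top_ < maxDepth); stack_[top_++] = n; }
        NOD const* pop()         { assert (0 < top_);  return stack_[--top_]; }
        bool empty()  const      { return top_ == 0; }
      };

    // builder function: deduces the node type, while the stack capacity
    // is fixed at compile time from the (static) depth of the network
    template<std::size_t maxDepth = 16, typename NOD>
    Evaluation<NOD,maxDepth>
    buildEvaluation (NOD const& startNode)
    {
      Evaluation<NOD,maxDepth> evaluation;
      evaluation.push (&startNode);
      return evaluation;    // everything lives on the stack, no heap involved
    }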
the template _Fun started as an internal helper
for function-closure, but seems to be of
general use. Thus move it into meta/function.hpp
(function-closure.hpp is heavyweight)
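for orientation, a typical shape of such a signature-detection helper; the actual _Fun in meta/function.hpp may well differ in the details

    #include <type_traits>

    template<typename FUN>
    struct _Fun
      : _Fun<decltype(&FUN::operator())>   // functor/lambda: inspect call operator
      { };

    template<typename RET, typename...ARGS>
    struct _Fun<RET(ARGS...)>              // base case: plain function signature
      {
        using Ret = RET;
        using Sig = RET(ARGS...);
      };

    template<typename RET, typename...ARGS>
    struct _Fun<RET(*)(ARGS...)>
      : _Fun<RET(ARGS...)>  { };

    template<class C, typename RET, typename...ARGS>
    struct _Fun<RET(C::*)(ARGS...) const>
      : _Fun<RET(ARGS...)>  { };

    // usage: extract the result type of an arbitrary callable
    static_assert (std::is_same_v<int, _Fun<int(double)>::Ret>);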
this enables expansion of a (functional) data structure
until exhaustion -- which is what we need to
build job functors by traversing and expanding
an arbitrarily nested job definition structure
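the principle in a nutshell, with illustrative names (exploreExhaustive is not the real IterExplorer interface): each element is consumed and then expanded into further elements, until the working set runs dry

    #include <deque>
    #include <iostream>
    #include <vector>

    template<typename ELM, class EXP, class CON>
    void
    exploreExhaustive (ELM seed, EXP expand, CON consume)
    {
      std::deque<ELM> pending{seed};
      while (!pending.empty())
        {
          ELM elm = pending.front();
          pending.pop_front();
          consume (elm);
          for (ELM child : expand (elm))       // monadic "flat map" step
            pending.push_back (child);
        }
    }

    int main()
    {
      // toy job definition: each number "expands" into its halves, until 1
      auto expand = [](int n) { return n > 1? std::vector<int>{n/2, n - n/2}
                                            : std::vector<int>{}; };
      exploreExhaustive (5, expand, [](int n){ std::cout << n << " "; });
    }   // prints 5 2 3 1 1 1 2 1 1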
the intention is to use this to simplify
generating render jobs based on the elaborated
dependency network of the render nodes. The key
challenge is to overcome the necessity to
store partially done evaluations as
continuations
the tricky part seems to be how to combine the
source iterators into a new monad instance, while
keeping this "Combinator" Strategy configurable
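one conceivable way to keep that strategy configurable is a policy template parameter; Interleave and join below are just placeholder names to show the pattern

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // example strategy: interleave elements from both sources
    struct Interleave
      {
        template<typename SEQ>
        static SEQ
        combine (SEQ const& a, SEQ const& b)
          {
            SEQ res;
            std::size_t i = 0;
            for ( ; i < a.size() && i < b.size(); ++i)
              {
                res.push_back (a[i]);
                res.push_back (b[i]);
              }
            for ( ; i < a.size(); ++i) res.push_back (a[i]);
            for ( ; i < b.size(); ++i) res.push_back (b[i]);
            return res;
          }
      };

    // the joining operation delegates to the configurable strategy
    template<class COMB, typename SEQ>
    SEQ
    join (SEQ const& a, SEQ const& b)
    {
      return COMB::combine (a, b);
    }

    int main()
    {
      std::vector<int> a{1,3,5}, b{2,4};
      for (int n : join<Interleave> (a, b))
        std::cout << n << " ";        // prints 1 2 3 4 5
    }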
...just passes the compiler, while still lacking
even the generic implementation of joining
together the source iterators
The idea is to avoid building a data structure
for intermediary results, while still being able
to process a variably sized and arbitrarily shaped
set of source data
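in essence a pull-based pipeline; here a toy Source with a hypothetical pull() interface, where each value is produced, processed and discarded immediately, so nothing of variable size is ever materialised

    #include <iostream>
    #include <optional>

    class Source
      {
        int next_ = 0;
        int limit_;

      public:
        explicit Source (int limit) : limit_{limit} { }

        std::optional<int>
        pull()    // produce the next element on demand
          {
            if (next_ >= limit_) return std::nullopt;
            return next_++;
          }
      };

    int main()
    {
      Source src{5};
      while (auto elm = src.pull())            // no intermediary container:
        std::cout << *elm * *elm << " ";       // each value is processed and
    }                                          // dropped at once: 0 1 4 9 16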