/*
TREE-MUTATOR.hpp - flexible binding to map generic tree changing operations
Copyright (C) Lumiera.org
2015, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file tree-mutator.hpp
** Customisable intermediary to abstract generic tree mutation operations.
** This is the foundation for generic treatment of tree altering operations,
** and especially the handling of changes (diff) to hierarchical data structures.
** The goal is to represent a standard set of conceptual operations working on
** arbitrary data structures, without requiring these data structures to
** comply with any interface or base type. Rather, we allow each instance to
** define binding closures, which tap into its internal data representation
** without any need for disclosure. The only assumption is that the data to
** be treated is \em hierarchical and \em object-like, i.e. it has (named)
** attributes and it may have a collection of children. If necessary, typing
** constraints can be integrated through a symbolic representation of types
** as chained identifiers (path-dependent types).
**
** The interface implemented by the TreeMutator is shaped so as to support
** the primitives of Lumiera's tree \link diff-language.hpp diff handling language. \endlink
** By default, each of these primitives is implemented as a \c NOP -- but each operation
** can be replaced by a binding closure, which allows arbitrary code to be invoked
** within the context of the given object's implementation internals.
**
** ## Builder/Adapter concept
** TreeMutator is both an interface and a set of building blocks.
** For concrete usage, the (private, undisclosed) target data structure is assumed
** to _build a subclass of TreeMutator._ To this end, the TreeMutator is complemented
** by a builder API. Each call on this builder -- typically providing some closure --
** adds yet another decorating layer on top of the basic TreeMutator (recall that all
** the "mutation primitives" are implemented as NOP within the base class). So the actual
** TreeMutator will be structured like an onion, where each layer cares for the single
** concrete aspect it was tied to by the supplied closure. For example, there might
** be a decorator to handle setting of a "foobar" attribute. Thus, whenever the diff
** dictates mutating "foobar", the corresponding closure will be invoked.
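**
** A target implementation would typically build its concrete mutator along
** the following lines (an illustrative sketch only: `gamma_` stands for some
** hypothetical private field, and the exact closure signature expected by
** `change()` is established by the attribute binding layer, not shown here):
** \code
** auto mutator = TreeMutator::build()
**                  .change ("gamma", [&](double val)
**                                     {
**                                       gamma_ = val;   // write-through into private data
**                                     });
** \endcode
** Each call adds another decorating layer; whenever the diff mentions the
** attribute "gamma", the closure given here receives the new value.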
**
** \par test dummy target
** There is a special adapter binding to support writing unit tests. The corresponding
** API is only forward declared by default. The TestMutationTarget is a helper class,
** which can be attached through this binding; it allows a unit test fixture to record
** and verify all the mutation operations encountered.
**
** @note to improve readability, the actual implementation of the "binding layers"
** is defined in separate headers and included towards the bottom of this header.
**
** @see tree-mutator-test.cpp
** @see DiffDetector
**
*/
#ifndef LIB_DIFF_TREE_MUTATOR_H
#define LIB_DIFF_TREE_MUTATOR_H
#include "lib/error.hpp"
#include "lib/symbol.hpp"
#include "lib/meta/trait.hpp"
#include "lib/diff/gen-node.hpp"
#include "lib/opaque-holder.hpp"
#include "lib/iter-adapter-stl.hpp"
//#include "lib/util.hpp"
//#include "lib/format-string.hpp"
#include <functional>
#include <utility> ////TODO
#include <string>
//#include <vector>
//#include <map>
namespace lib {
/////////////////////////////TODO move over into opaque-holder.hpp
/**
* handle to allow for safe _»remote implantation«_
* of an unknown subclass into a given OpaqueHolder buffer,
* without having to disclose the concrete buffer type or size.
* @remarks this is especially geared towards use in APIs, allowing
* a not yet known implementation to implant an agent or collaboration
* partner into the likewise undisclosed innards of the exposed service.
* @warning the type BA must expose a virtual dtor, since the targeted
* OpaqueHolder has to take ownership of the implanted object.
*/
template<class BA>
class PlantingHandle
  {
    void* buffer_;
    size_t maxSiz_;
    
    ///////TODO static assert to virtual dtor??
    
  public:
    template<size_t maxSiz>
    PlantingHandle (InPlaceBuffer<BA, maxSiz>& targetBuffer)
      : buffer_(&targetBuffer)
      , maxSiz_(maxSiz)
      { }
    
    template<class SUB>
    BA&
    create (SUB&& subMutator)
      {
        if (sizeof(SUB) > maxSiz_)
          throw error::Fatal("Unable to implant implementation object of size "
                             "exceeding the pre-established storage buffer capacity."
                            ,error::LUMIERA_ERROR_CAPACITY);
        
        using Holder = InPlaceBuffer<BA, sizeof(SUB)>;
        Holder& holder = *static_cast<Holder*> (buffer_);
        return holder.template create<SUB> (std::forward<SUB> (subMutator));
      }
    
    template<class SUB>
    bool
    canCreate()  const
      {
        return sizeof(SUB) <= maxSiz_;
      }
  };
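
/* Example (illustrative sketch): a service privately owns an InPlaceBuffer and
 * hands out a PlantingHandle, so client code can implant its own subclass without
 * seeing the buffer's concrete type or size. `MyMutator` is a hypothetical
 * subclass of the buffer's base type TreeMutator.
 *
 *     InPlaceBuffer<TreeMutator, 200> buffer;          // private storage within the service
 *     PlantingHandle<TreeMutator> handle (buffer);     // handed out across the API boundary
 *
 *     if (handle.canCreate<MyMutator>())               // verify the subclass fits the buffer
 *       {
 *         TreeMutator& planted = handle.create (MyMutator{});
 *       }                                              // ownership now resides in the buffer
 */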
/////////////////////////////TODO move over into opaque-holder.hpp
namespace diff{
namespace error = lumiera::error;
//using util::_Fmt;
using lib::Literal;
using std::function;
using std::string;
class TestMutationTarget; // for unit testing
namespace {
template<class PAR>
struct Builder;
using ID = Literal;
using Attribute = DataCap;
}
/**
* Customisable intermediary to abstract mutating operations
* on arbitrary, hierarchical object-like data.
* The TreeMutator exposes two distinct interfaces
* - the \em operation API -- similar to what a container exposes --
*   is the entirety of abstract operations that can be done to the
*   subsumed, tree-like target structure
* - the \em binding API allows linking some or all of these generic
*   activities to concrete manipulations known within the target scope.
*/
class TreeMutator
  {
  public:
    virtual ~TreeMutator() { }   ///< implanted subclasses are owned and destroyed through this base interface (cf. PlantingHandle)
    
    
    /* ==== operation API ==== */
    
    /** query whether the abstract source sequence is empty (exhausted) */
    virtual bool
    emptySrc ()
      {
        return true;
        // do nothing by default
      }
    
    /** skip next src element and advance abstract source position */
    virtual void
    skipSrc ()
      {
        // do nothing by default
      }
    
    /** establish new element at current position */
    virtual void
    injectNew (GenNode const&)
      {
        // do nothing by default
      }
    /** ensure the next source element matches with given spec */
    virtual bool
    matchSrc (GenNode const&)
      {
        // do nothing by default
        return false;
      }
    
    /** accept existing element, when matching the given spec */
    virtual bool
    acceptSrc (GenNode const&)
      {
        // do nothing by default
        return false;
      }
    
    /** repeatedly accept, until after the designated location */
    virtual bool
    accept_until (GenNode const&)
      {
        // do nothing by default
        return false;
      }
    
    /** locate designated element and accept it at current position */
    virtual bool
    findSrc (GenNode const&)
      {
        // do nothing by default
        return false;
      }
    
    /** locate the designated target element
     *  (must be already accepted into the target sequence).
     *  Perform an assignment with the given payload value
     *  @throw when assignment fails (typically error::Logic)
     *  @return false when unable to locate the target */
    virtual bool
    assignElm (GenNode const&)
      {
        // do nothing by default
        return false;
      }
    
    using MutatorBuffer = PlantingHandle<TreeMutator>;
    
    /** locate the designated target element
     *  and build a suitable sub-mutator for this element
     *  into the provided target buffer
     *  @throw error::Fatal when buffer is insufficient
     *  @return false when unable to locate the target */
    virtual bool
    mutateChild (GenNode const&, MutatorBuffer)
      {
        // do nothing by default
        return false;
      }
    
    virtual void setAttribute (ID, Attribute&) { /* do nothing by default */ }
    
    /**
     * start building a custom adapted tree mutator,
     * where the operations are tied by closures or
     * wrappers into the current implementation context.
     */
    static Builder<TreeMutator> build();
  };
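
/* Illustrative sketch of a "binding layer" (hypothetical and highly simplified):
 * each layer generated by the following builder derives from the chain built so
 * far (its parent layer PAR) and overrides just those operation primitives it
 * cares about, delegating everything else to the base implementation. The real
 * layers (ChangeOperation, ChildCollectionMutator, TestWireTap) are defined in
 * the implementation detail headers included below.
 *
 *     template<class PAR>
 *     struct TracingLayer
 *       : PAR
 *       {
 *         TracingLayer (PAR chain)
 *           : PAR(chain)
 *           { }
 *
 *         virtual void
 *         injectNew (GenNode const& spec)  override
 *           {
 *             recordInsert (spec);        // this layer's own concern (hypothetical helper)
 *             PAR::injectNew (spec);      // delegate to the next layer down the chain
 *           }
 *       };
 */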
namespace { // Mutator-Builder decorator components...
using lib::meta::Strip;
using lib::meta::Types;
/**
* Type rebinding helper to pick up the actual argument type.
* Works both for functors and for lambda expressions
* @remarks Solution proposed 10/2011 by \link http://stackoverflow.com/users/224671/kennytm user "kennytm" \endlink
* in this \link http://stackoverflow.com/questions/7943525/is-it-possible-to-figure-out-the-parameter-type-and-return-type-of-a-lambda/7943765#7943765
* answer on stackoverflow \endlink
* @todo this should be integrated into (\ref _Fun) //////////////////////////////////////TICKET #994
*/
template<typename FUN>
struct _ClosureType
  : _ClosureType<decltype(&FUN::operator())>
  { };

template<class C, class RET, typename...ARGS>
struct _ClosureType<RET (C::*)(ARGS...) const>
  {
    using Args = typename Types<ARGS...>::Seq;
    using Ret  = RET;
    using Sig  = RET(ARGS...);
  };

template<class RET, typename...ARGS>
struct _ClosureType<RET (*)(ARGS...)>
  {
    using Args = typename Types<ARGS...>::Seq;
    using Ret  = RET;
    using Sig  = RET(ARGS...);
  };

template<typename FUN, typename SIG>
struct has_Sig
  : std::is_same<SIG, typename _ClosureType<FUN>::Sig>
  { };
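
/* Example (illustrative): the signature trait above allows a binding layer to
 * verify at compile time that a supplied closure has the expected shape. Both
 * the closure and the expected signature are made up for this example:
 *
 *     auto setter = [](double newVal) { applyValue(newVal); };   // applyValue is hypothetical
 *     static_assert (has_Sig<decltype(setter), void(double)>::value,
 *                    "closure must accept the new attribute value");
 */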
/* == implementation detail headers == */
#include "lib/diff/tree-mutator-attribute-binding.hpp"
#include "lib/diff/tree-mutator-collection-binding.hpp"
template<class PAR>
struct TestWireTap;
/**
* Builder-DSL to create and configure a concrete TreeMutator
* @remarks all generated follow-up builders are chained and
* derive from the implementation of the preceding
* "binding layer" and the TreeMutator interface.
*/
template<class PAR>
struct Builder
  : PAR
  {
    Builder(PAR par)
      : PAR(par)
      { }
    
    template<class CLO>
    using Change = ChangeOperation<PAR,CLO>;
    
    template<class BIN>
    using Collection = ChildCollectionMutator<PAR,BIN>;
    
    using WireTap = TestWireTap<PAR>;
    
    
    /* ==== binding API ==== */
    
    /** bind a closure to handle setting of the attribute designated by `attributeID` */
    template<typename CLO>
    Builder<Change<CLO>>
    change (Literal attributeID, CLO closure)
      {
        return Change<CLO> (attributeID, closure, *this);
      }
    /** set up a binding to a structure of "child objects",
     *  implemented through a typical STL container
     *  @param collectionBindingSetup as created by invoking a nested DSL,
     *         initiated by a builder function `collection(implRef)`, where `implRef`
     *         is a (language) reference to an STL-compliant container existing somewhere
     *         within the otherwise opaque implementation. The type of the container and
     *         thus the type of the elements will be picked up, and the returned builder
     *         can be further outfitted with the builder methods, which take lambdas as
     *         callbacks into the implementation.
     */
    template<typename BIN>
    Builder<Collection<BIN>>
    attach (BIN&& collectionBindingSetup)
      {
        return Collection<BIN> (std::forward<BIN>(collectionBindingSetup), *this);
      }
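    
    /* Example (illustrative sketch): attaching a child collection. `children_` is
     * a hypothetical std::vector living inside the otherwise opaque target object;
     * the nested binding DSL mentioned above may then be used to supply further
     * lambdas (element matching, construction, recursive mutation) -- omitted here,
     * since those builder functions live in tree-mutator-collection-binding.hpp.
     *
     *     std::vector<GenNode> children_;
     *     auto mutator = TreeMutator::build()
     *                      .attach (collection(children_));
     */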
    /** set up a diagnostic layer, binding to TestMutationTarget.
     *  This can be used to monitor the behaviour of the resulting TreeMutator in tests.
     */
    Builder<WireTap>
    attachDummy (TestMutationTarget& dummy);
  };
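
/* Example (illustrative): wiring a TreeMutator for a unit test. The
 * TestMutationTarget (declared above, provided by the test support code)
 * records every operation the resulting mutator receives, so the test can
 * verify the observed sequence afterwards. `someSpec` is a hypothetical GenNode.
 *
 *     TestMutationTarget target;
 *     auto mutator = TreeMutator::build()
 *                      .attachDummy (target);
 *     mutator.injectNew (someSpec);          // recorded by 'target' for later verification
 */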
}//(END) Mutator-Builder...
inline Builder<TreeMutator>
TreeMutator::build ()
{
  return TreeMutator();
}
}} // namespace lib::diff
#endif /*LIB_DIFF_TREE_MUTATOR_H*/