lumiera_/tests/library/diff/generic-tree-mutator-test.cpp

/*
GenericTreeMutator(Test) - customisable intermediary to abstract tree changing operations
Copyright (C) Lumiera.org
2015, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "lib/test/run.hpp"
#include "lib/test/test-helper.hpp"
#include "lib/diff/tree-mutator.hpp"
#include "lib/util.hpp"
//#include <utility>
#include <string>
//#include <vector>
#include <iostream>
using util::isnil;
using std::string;
//using std::vector;
//using std::swap;
using std::cout;
using std::endl;
/* Design note (commit of 2015-04-05 18:26):
 * Settle on a concrete implementation approach based on an inheritance chain.
 * After some reconsideration, I decided to stick with the closure-based approach,
 * but to use a metaprogramming technique to build an inheritance chain. While I
 * cannot judge the real-world impact of storing all those closures, in theory this
 * approach should enable the compiler to remove all of the storage overhead: when
 * the result is stored into an auto variable right within scope (as demonstrated
 * in this test), the compiler sees the concrete type and might be able to boil
 * down the generated virtual function implementations, thereby inlining the given
 * closures. If, on the other hand, we took the obvious conventional route and
 * placed the closures into a map allocated on the stack, I would not expect the
 * compiler to perform the data-flow analysis needed to prove that allocation
 * unnecessary and inline it away.
 * NOTE: there is no guarantee this inlining trick will ever work, and we know
 * nothing about the actual runtime effect. The whole picture is more involved
 * than it might seem at first sight. Even if we went the completely conventional
 * route and required every participating object to supply an implementation of
 * some kind of "Serializable" interface, we would end up with a (hand-written!)
 * implementation class for each participating setup, which takes up space in the
 * code segment of the executable. The closure-based approach chosen here consumes
 * data segment (or heap) space per instance for the functors (or function
 * pointers) representing the closures, plus code segment space for the closures
 * themselves -- but the latter with a much higher potential for inlining, since
 * the closure code and the generated virtual functions are necessarily emitted
 * within the same compilation unit and within a local (inline, not publicly
 * exposed) scope.
 */
using lib::test::showType;
using lib::test::demangleCxx;
namespace lib {
namespace diff {
namespace test {
  
  // using lumiera::error::LUMIERA_ERROR_LOGIC;
  
  namespace { // Test fixture....
  }           // (End) Test fixture
  /*****************************************************************************//**
   * @test Demonstrate a customisable component for flexible bindings,
   *       to enable generic tree changing and mutating operations on
   *       arbitrary hierarchical data structures.
   * 
   * @see TreeMutator
   * @see GenNodeBasic_test
   * @see GenericTreeRepresentation_test
   */
  class GenericTreeMutator_test : public Test
    {
      virtual void
      run (Arg)
        {
          simpleAttributeBinding();
          verifySnapshot();
          sequenceIteration();
          duplicateDetection();
          copy_and_move();
        }
      
      void
      simpleAttributeBinding()
        {
          string localData;
          auto mutator =
            TreeMutator::build()
              .change<string>("data", [&](string val)
                {
                  cout << "\"data\" closure received something " << val << endl;
                  localData = val;
                });
          
          cout << "concrete TreeMutator size=" << sizeof(mutator)
               << " type=" << demangleCxx (showType (mutator))
               << endl;
          
          CHECK (isnil (localData));
          
          Attribute testAttribute(string ("that would be acceptable"));
          mutator.setAttribute ("lore", testAttribute);
          CHECK (isnil (localData));       // nothing happens, nothing changed
          
          mutator.setAttribute ("data", testAttribute);
          CHECK (!isnil (localData));
          cout << "localData changed to: " << localData << endl;
          CHECK (localData == "that would be acceptable");
        }
      
      void
      verifySnapshot()
        {
        }
      
      void
      sequenceIteration()
        {
        }
      
      void
      duplicateDetection()
        {
        }
      
      void
      copy_and_move()
        {
        }
    };
  
  /** Register this test class... */
  LAUNCHER (GenericTreeMutator_test, "unit common");
  
}}} // namespace lib::diff::test