integrate recent GUI / timeline and work done on the player subsystem

This commit is contained in:
Fischlurch 2011-11-27 02:15:11 +01:00
commit c7d4412cec
48 changed files with 4035 additions and 896 deletions

View file

@ -90,6 +90,9 @@ NOBUG_CPP_DEFINE_FLAG_PARENT ( fileheader_dbg, backend_dbg);
/** base of debug logging for the proc layer */
NOBUG_CPP_DEFINE_FLAG_PARENT ( proc_dbg, debugging);
NOBUG_CPP_DEFINE_FLAG_PARENT ( command_dbg, proc_dbg);
NOBUG_CPP_DEFINE_FLAG_PARENT ( session_dbg, proc_dbg);
NOBUG_CPP_DEFINE_FLAG_PARENT ( player_dbg, proc_dbg);
NOBUG_CPP_DEFINE_FLAG_PARENT ( engine_dbg, proc_dbg);
/** base of debug logging for the gui */
NOBUG_CPP_DEFINE_FLAG_PARENT ( gui_dbg, debugging);
/** base of debug logging for the support library */

View file

@ -215,6 +215,27 @@ namespace lumiera {
} // namespace lumiera
/******************************************************
* convenience shortcut for a sequence of catch blocks
* just logging and consuming an error. Typically
* this sequence will be used within destructors,
* which, by convention, must not throw
*/
#define ERROR_LOG_AND_IGNORE(_FLAG_,_OP_DESCR_) \
catch (std::exception& problem) \
{ \
const char* errID = lumiera_error(); \
WARN (_FLAG_, "%s failed: %s", _OP_DESCR_, problem.what()); \
TRACE (debugging, "Error flag was: %s", errID);\
} \
catch (...) \
{ \
const char* errID = lumiera_error(); \
ERROR (_FLAG_, "%s failed with unknown exception; " \
"error flag is: %s" \
, _OP_DESCR_, errID); \
}
/******************************************************

View file

@ -22,13 +22,14 @@
/** @file iter-adapter.hpp
** Helper template(s) for creating <b>lumiera forward iterators</b>.
** These are the foundation to build up iterator like types from scratch.
** Usually, these templates will be created and provided by a custom
** container type and accessed by the client through a typedef name
** "iterator" (similar to the usage within the STL). For more advanced
** usage, the providing container might want to subclass these iterators,
** e.g. to provide an additional, specialised API.
**
** Depending on the concrete situation, several flavours are provided:
** - the IterAdapter retains an active callback connection to the
** controlling container, thus allowing arbitrary complex behaviour.
** - the RangeIter allows just to expose a range of elements defined
@ -36,21 +37,32 @@
** - often, objects are managed internally by pointers, while allowing
** the clients to use direct references; to support this usage scenario,
** PtrDerefIter wraps an existing iterator, while dereferencing any value
** automatically on access.
**
** There are many further ways of yielding a Lumiera forward iterator.
** For example, lib::IterSource builds an "iterable" source of data elements,
** while hiding the actual container or generator implementation behind a
** vtable call. Besides, there are adapters for the most common usages
** with STL containers, and such iterators can also be combined and
** extended with the help of itertools.hpp
**
** Basically every class in compliance with our specific iterator concept
** can be used as a building block in this framework.
**
**
** \par Lumiera forward iterator concept
**
** Similar to the STL, instead of using a common "Iterator" base class,
** we rather define a common set of functions and behaviour which can
** be expected from any such iterator. These rules are similar to STL's
** "forward iterator", with the addition of a bool check to detect
** iteration end. The latter is inspired by the \c hasNext() function
** found in many current languages supporting iterators. In a similar
** vein (inspired from functional programming), we deliberately don't
** support the various extended iterator concepts from STL and boost
** (random access iterators, output iterators, arithmetics, difference
** between iterators and the like). According to this concept,
** <i>an iterator is a promise for pulling values,</i>
** and nothing beyond that.
**
** - Any Lumiera forward iterator can be in an "exhausted" (invalid) state,
@ -58,7 +70,7 @@
** created by the default ctor is always fixed to that state. This
** state is final and can't be reset, meaning that any iterator is
** a disposable one-way-off object.
** - iterators are copyable and equality comparable
** - when an iterator is \em not in the exhausted state, it may be
** \em dereferenced to yield the "current" value.
** - moreover, iterators may be incremented until exhaustion.
@ -408,7 +420,7 @@ namespace lib {
typedef typename RemovePtr<TY>::Type ValueType;
template<class T2>
struct SimilarIter  ///< rebind to a similarly structured Iterator with value type T2
{
typedef Iter<T2,CON> Type;
};
@ -445,7 +457,12 @@ namespace lib {
public:
typedef typename IT::value_type pointer;
typedef typename RemovePtr<pointer>::Type value_type;
typedef value_type& reference;
// for use with STL algorithms
typedef void difference_type;
typedef std::forward_iterator_tag iterator_category;
// the purpose of the following typedefs is to ease building a correct "const iterator"
@ -462,6 +479,7 @@ namespace lib {
/** PtrDerefIter is always created
* by wrapping an existing iterator.
*/
explicit
PtrDerefIter (IT srcIter)
: i_(srcIter)
{ }
@ -485,9 +503,31 @@ namespace lib {
operator= (PtrDerefIter<WrappedIterType> const& ref)
{
i_ = reinterpret_cast<IT const&> (ref.getBase());
return *this;
}
/** explicit builder to allow creating a const variant from the basic srcIter type.
* Again, the reason necessitating this "backdoor" is that we want to swallow one level
* of indirection. Generally speaking \code const T ** \endcode is not the same as
* \code T * const * \endcode, but in our specific case the API ensures that a
* PtrDerefIter<WrappedConstIterType> only exposes const elements.
*/
static PtrDerefIter
build_by_cast (WrappedIterType const& srcIter)
{
return PtrDerefIter (reinterpret_cast<IT const&> (srcIter));
}
static PtrDerefIter
nil()
{
return PtrDerefIter (IT());
}
/* === lumiera forward iterator concept === */

src/lib/maybe.hpp (new file, 122 lines)
View file

@ -0,0 +1,122 @@
/*
MAYBE.hpp - dealing with optional values
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file maybe.hpp
** Support for representation of optional values.
** This implements a concept ("option monad") known from functional programming,
** making it possible to express that some value may be unavailable. Using this
** approach avoids the dangerous technique of (ab)using NULL pointers to
** represent missing values.
**
** While a NULL pointer carries this special meaning just by convention, marking a
** parameter or return value as optional states this fact as a first-class property,
** and enforces the necessary "is available" check through the type system. Surprisingly,
** this leads not only to more secure, but also much more compact code, as we're now
** able to substitute a fallback just by an "or else use this" clause.
** Basically, there are different ways to access the actual value:
** - access through implicit conversion raises an exception for missing values
** - evaluation as boolean allows checking whether the value is available
** - an alternative or fallback value may be attached.
**
** @todo WIP and rather brainstorming as of 2/10
**
** @see backend::ThreadJob usage example
*/
#ifndef LIB_MAYBE_H
#define LIB_MAYBE_H
//#include "pre.hpp"
#include "lib/error.hpp"
//#include "lib/wrapper.hpp"
#include "lib/util.hpp"
#include <string>
namespace lib {
using util::isnil;
using std::string;
namespace error = lumiera::error;
namespace maybe {
}
/**
* A value, which might be unavailable
* @throw error::State on any attempt to access a missing value
* without prior checking the availability
*/
template<typename VAL>
class Maybe
{
VAL value_;
public:
/** mark an invalid/failed result */
Maybe ()
{ }
/** standard case: valid result */
Maybe (VAL const& value)
: value_(value)
{ }
bool
isValid() const
{
UNIMPLEMENTED ("check if optional value is available");
}
void
maybeThrow(Literal explanation =0) const
{
if (!isValid())
throw error::State (explanation.empty()? "optional value not available" : string(explanation),
error::LUMIERA_ERROR_BOTTOM_VALUE);
}
VAL
get() const
{
maybeThrow();
return value_;
}
};
} // namespace lib
#endif

View file

@ -48,8 +48,10 @@
#include "include/logging.h"
#include "lib/iter-adapter.hpp"
#include "lib/error.hpp"
#include "lib/util.hpp"
#include <vector>
#include <algorithm>
#include <boost/noncopyable.hpp>
@ -123,22 +125,43 @@ namespace lib {
} }
/** withdraw responsibility for a specific object.
* This object will be removed from this collection
* and returned as-is; it won't be deleted when the
* ScopedPtrVect goes out of scope.
* @param obj address of the object in question.
* @return pointer to the object, if found.
* Otherwise, NULL will be returned and the
* collection of managed objects remains unaltered
* @note EX_STRONG
* @todo TICKET #856 better return a Maybe<T&> instead of a pointer?
*/
T*
detach (void* objAddress)
{
T* extracted = static_cast<T*> (objAddress);
VIter pos = std::find (_Vec::begin(),_Vec::end(), extracted);
if (pos != _Vec::end() && bool(*pos))
{
extracted = *pos;
_Vec::erase(pos); // EX_STRONG
return extracted;
}
return NULL;
}
void
clear()
{
VIter e = _Vec::end();
for (VIter i = _Vec::begin(); i!=e; ++i)
        {
          if (*i)
            try {
                delete *i;
                *i = 0;
              }
            ERROR_LOG_AND_IGNORE (library, "Clean-up of ScopedPtrVect")
        }
_Vec::clear();
}
@ -155,9 +178,9 @@ namespace lib {
typedef ConstIterType const_iterator;
iterator begin() { return iterator (allPtrs()); }
iterator end() { return iterator ( RIter() ); }
const_iterator begin() const { return const_iterator::build_by_cast (allPtrs()); }
const_iterator end() const { return const_iterator::nil(); }
@ -188,6 +211,12 @@ namespace lib {
{
return RIter (_Vec::begin(), _Vec::end());
}
RIter
allPtrs () const
{
_Vec& elements = util::unConst(*this);
return RIter (elements.begin(), elements.end());
}
};

View file

@ -22,6 +22,7 @@
#include "lib/test/test-helper.hpp"
#include "lib/test/testdummy.hpp"
#include <boost/format.hpp>
@ -55,7 +56,12 @@ namespace test{
garbage[--p] = alpha[rand() % MAXAL];
return garbage;
}
/** storage for testdummy flags */
long Dummy::_local_checksum = 0;
bool Dummy::_throw_in_ctor = false;
}} // namespace lib::test

View file

@ -0,0 +1,99 @@
/*
TESTDUMMY.hpp - yet another test dummy for tracking ctor/dtor calls
Copyright (C) Lumiera.org
2008, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include <boost/noncopyable.hpp>
#include <algorithm>
namespace lib {
namespace test{
class Dummy
: boost::noncopyable
{
int val_;
/** to verify ctor/dtor calls */
static long _local_checksum;
static bool _throw_in_ctor;
public:
Dummy ()
: val_(1 + (rand() % 100000000))
{ init(); }
Dummy (int v)
: val_(v)
{ init(); }
~Dummy()
{
checksum() -= val_;
}
long add (int i) { return val_+i; }
int getVal() const { return val_; }
void
setVal (int newVal)
{
checksum() += newVal - val_;
val_ = newVal;
}
friend void
swap (Dummy& dum1, Dummy& dum2) ///< checksum neutral
{
std::swap(dum1.val_, dum2.val_);
}
static long&
checksum()
{
return _local_checksum;
}
static void
activateCtorFailure(bool indeed =true)
{
_throw_in_ctor = indeed;
}
private:
void
init()
{
checksum() += val_;
if (_throw_in_ctor)
throw val_;
}
};
}} // namespace lib::test

View file

@ -0,0 +1,128 @@
/*
BUFFER-LOCAL-KEY.hpp - opaque data for BufferProvider implementation
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef PROC_ENGINE_BUFFER_LOCAL_KEY_H
#define PROC_ENGINE_BUFFER_LOCAL_KEY_H
#include "lib/error.hpp"
#include <boost/functional/hash.hpp>
namespace lib {
typedef size_t HashVal;
}
namespace engine {
namespace metadata {
class Key;
class Entry;
}
class BufferMetadata;
using lib::HashVal;
/**
* an opaque ID to be used by the BufferProvider implementation.
* Typically this will be used to set apart some pre-registered
* kinds of buffers. It is treated as being part of the buffer type.
* LocalKey objects may be copied but not re-assigned or changed.
*/
class LocalKey
{
union OpaqueData
{
uint64_t _as_number;
void* _as_pointer;
};
OpaqueData privateID_;
public:
explicit
LocalKey (uint64_t opaqueValue=0)
{
privateID_._as_number = opaqueValue;
}
LocalKey (void* impl_related_ptr)
{
privateID_._as_number = 0;
privateID_._as_pointer = impl_related_ptr;
}
operator uint64_t() const
{
return privateID_._as_number;
}
operator void*() const
{
return privateID_._as_pointer;
}
bool
isDefined() const
{
return bool(privateID_._as_number);
}
friend size_t
hash_value (LocalKey const& lkey)
{
boost::hash<uint64_t> hashFunction;
return hashFunction(lkey.privateID_._as_number);
}
friend bool
operator== (LocalKey const& left, LocalKey const& right)
{
return uint64_t(left) == uint64_t(right);
}
friend bool
operator!= (LocalKey const& left, LocalKey const& right)
{
return uint64_t(left) != uint64_t(right);
}
private:
/** assignment usually prohibited */
LocalKey& operator= (LocalKey const& o)
{
privateID_ = o.privateID_;
return *this;
}
/** but Key assignments are acceptable */
friend class metadata::Key;
};
} // namespace engine
#endif

View file

@ -31,7 +31,7 @@
** - that overall storage size available within the buffer
** - a pair of custom \em creator and \em destructor functions to use together with this buffer
** - an additional client key to distinguish otherwise identical client requests
** These three distinctions are applied in sequence, thus forming a type tree with 3 levels.
** Only the first distinguishing level (the size) is mandatory. The others are provided,
** because some of the foreseeable buffer providers allow re-accessing the data placed
** into the buffer, by assigning an internally managed ID to the buffer. The most
@ -56,10 +56,12 @@
#include "lib/error.hpp"
#include "lib/symbol.hpp"
#include "lib/functor-util.hpp"
#include "lib/util-foreach.hpp"
#include "include/logging.h"
#include "proc/engine/type-handler.hpp"
#include "proc/engine/buffer-local-key.hpp"
#include <tr1/functional>
#include <boost/functional/hash.hpp>
#include <tr1/unordered_map>
#include <boost/noncopyable.hpp>
@ -67,9 +69,7 @@ namespace engine {
using lib::HashVal;
using lib::Literal;
using std::tr1::bind;
using std::tr1::function;
using std::tr1::placeholders::_1;
using util::for_each;
namespace error = lumiera::error;
@ -77,165 +77,27 @@ namespace engine {
class Key;
class Entry;
}
class BufferMetadata;
/**
* Buffer states
* usable within BufferProvider
* and stored within the metadata
*/
enum BufferState
{ NIL, ///< abstract entry, not yet allocated
FREE, ///< allocated buffer, no longer in use
LOCKED, ///< allocated buffer actively in use
EMITTED, ///< allocated buffer, returned from client
BLOCKED ///< allocated buffer blocked by protocol failure
};
/**
* an opaque ID to be used by the BufferProvider implementation.
* Typically this will be used, to set apart some pre-registered
* kinds of buffers. It is treated as being part of the buffer type.
* LocalKey objects may be copied but not re-assigned or changed.
*/
class LocalKey
{
uint64_t privateID_;
public:
LocalKey (uint64_t opaqueValue=0)
: privateID_(opaqueValue)
{ }
operator uint64_t() const { return privateID_; }
friend size_t
hash_value (LocalKey const& lkey)
{
boost::hash<uint64_t> hashFunction;
return hashFunction(lkey.privateID_);
}
private:
/** assignment usually prohibited */
LocalKey& operator= (LocalKey const& o)
{
privateID_ = o.privateID_;
return *this;
}
/** but Key assignments are acceptable */
friend class metadata::Key;
};
namespace { // Helpers for construction within the buffer...
template<class X>
inline void
buildIntoBuffer (void* storageBuffer)
{
new(storageBuffer) X();
}
template<class X, typename A1>
inline void
buildIntoBuffer_A1 (void* storageBuffer, A1 arg1)
{
new(storageBuffer) X(arg1);
}
template<class X>
inline void
destroyInBuffer (void* storageBuffer)
{
X* embedded = static_cast<X*> (storageBuffer);
embedded->~X();
}
}//(End)placement-new helpers
/**
* A pair of functors to maintain a datastructure within the buffer.
* TypeHandler describes how to outfit the buffer in a specific way.
* When defined, the buffer will be prepared when locking and cleanup
* will be invoked automatically when releasing. Especially, this
* can be used to \em attach an object to the buffer (placement-new)
*/
struct TypeHandler
{
typedef function<void(void*)> DoInBuffer;
DoInBuffer createAttached;
DoInBuffer destroyAttached;
/** build an invalid NIL TypeHandler */
TypeHandler()
: createAttached()
, destroyAttached()
{ }
/** build a TypeHandler
* binding to arbitrary constructor and destructor functions.
* On invocation, these functions get a void* to the buffer.
* @note the functor objects created from these operations
* might be shared for handling multiple buffers.
* Be careful with any state or arguments.
*/
template<typename CTOR, typename DTOR>
TypeHandler(CTOR ctor, DTOR dtor)
: createAttached (ctor)
, destroyAttached (dtor)
{ }
/** builder function defining a TypeHandler
* to place a default-constructed object
* into the buffer. */
template<class X>
static TypeHandler
create ()
{
return TypeHandler (buildIntoBuffer<X>, destroyInBuffer<X>);
}
template<class X, typename A1>
static TypeHandler
create (A1 a1)
{
return TypeHandler ( bind (buildIntoBuffer_A1<X,A1>, _1, a1)
, destroyInBuffer<X>);
}
bool
isValid() const
{
return bool(createAttached)
&& bool(destroyAttached);
}
friend HashVal
hash_value (TypeHandler const& handler)
{
HashVal hash(0);
if (handler.isValid())
{
boost::hash_combine(hash, handler.createAttached);
boost::hash_combine(hash, handler.destroyAttached);
}
return hash;
}
friend bool
operator== (TypeHandler const& left, TypeHandler const& right)
{
return (!left.isValid() && !right.isValid())
|| ( util::rawComparison(left.createAttached, right.createAttached)
&& util::rawComparison(left.destroyAttached, right.destroyAttached)
);
}
friend bool
operator!= (TypeHandler const& left, TypeHandler const& right)
{
return !(left == right);
}
};
namespace { // internal constants to mark the default case
@ -258,7 +120,7 @@ namespace engine {
/* === Implementation === */
/* === Metadata Implementation === */
namespace metadata {
@ -275,6 +137,13 @@ namespace engine {
}
}
/**
* Description of a Buffer-"type".
* Key elements will be used to generate hash IDs,
* to be embedded into a BufferDescriptor.
* Keys are chained hierarchically.
*/
class Key
{
HashVal parent_;
@ -342,27 +211,108 @@ namespace engine {
{ }
/** build derived Key for a concrete buffer Entry
* @param parent type key to subsume this buffer
* @param bufferAddr pointer to the concrete buffer
* @return Child key with hashID based on the buffer address.
* For a NULL buffer, a copy of the parent is returned.
*/
static Key
forEntry (Key const& parent, const void* bufferAddr, LocalKey const& implID =UNSPECIFIC)
{
Key newKey(parent);
if (bufferAddr)
{
newKey.parent_ = HashVal(parent);
newKey.hashID_ = chainedHash(parent, bufferAddr);
if (nontrivial(implID))
{
REQUIRE (!newKey.specifics_.isDefined(),
"Implementation defined local key should not be overridden. "
"Underlying buffer type already defines a nontrivial LocalKey");
newKey.specifics_ = implID;
} }
return newKey;
}
void
useTypeHandlerFrom (Key const& ref)
{
if (nontrivial(this->instanceFunc_))
throw error::Logic ("unable to supersede an already attached TypeHandler"
, LUMIERA_ERROR_LIFECYCLE);
instanceFunc_ = ref.instanceFunc_;
}
LocalKey const& localKey() const { return specifics_;}
size_t storageSize() const { return storageSize_; }
HashVal parentKey() const { return parent_;}
operator HashVal() const { return hashID_;}
};
/**
* A complete metadata Entry, based on a Key.
* This special Key element usually describes an actual Buffer.
* Entries are to be managed in a hashtable, which is "the metadata table".
* As a special case, an entry without a concrete buffer storage pointer
* can be created. This corresponds to a (plain) key and describes just
* a buffer type. Such type-only entries are fixed to the NIL state.
* All other entries allow for state transitions.
*
* The "metadata table" with its entries is maintained by an engine::BufferMetadata
* instance. For the latter, Entry serves as representation and access point
* to the individual metadata; this includes using the TypeHandler for
* building and destroying buffer structures.
*/
class Entry
: public Key
{
BufferState state_;
void* buffer_;
protected:
Entry (Key const& parent, void* bufferPtr =0, LocalKey const& implID =UNSPECIFIC)
: Key (Key::forEntry (parent, bufferPtr, implID))
, state_(bufferPtr? LOCKED:NIL)
, buffer_(bufferPtr)
{ }
/// BufferMetadata is allowed to create
friend class engine::BufferMetadata;
// standard copy operations permitted
public:
/** is this Entry currently associated to a
* concrete buffer? Is this buffer in use? */
bool
isLocked() const
{
ASSERT (!buffer_ || (NIL != state_ && FREE != state_));
return bool(buffer_);
}
/** is this Entry just an (abstract) placeholder for a type?
* @return false if it's a real entry corresponding to a concrete buffer
*/
bool
isTypeKey() const
{
return NIL == state_ && !buffer_;
}
BufferState
state() const
{
__must_not_be_NIL();
return state_;
}
void*
access()
{
__must_not_be_NIL();
__must_not_be_FREE();
@ -371,40 +321,83 @@ namespace engine {
return buffer_;
}
/** Buffer state machine */
Entry&
mark (BufferState newState)
{
__must_not_be_NIL();
if ( (state_ == FREE && newState == LOCKED)
||(state_ == LOCKED && newState == EMITTED)
||(state_ == LOCKED && newState == BLOCKED)
||(state_ == LOCKED && newState == FREE)
||(state_ == EMITTED && newState == BLOCKED)
||(state_ == EMITTED && newState == FREE)
||(state_ == BLOCKED && newState == FREE))
{
// allowed transition
if (newState == FREE)
invokeEmbeddedDtor_and_clear();
if (newState == LOCKED)
invokeEmbeddedCtor();
state_ = newState;
return *this;
}
throw error::Fatal ("Invalid buffer state transition.");
}
Entry&
lock (void* newBuffer)
{
__must_be_FREE();
buffer_ = newBuffer;
return mark (LOCKED);
}
Entry&
invalidate (bool invokeDtor =true)
{
if (buffer_ && invokeDtor)
invokeEmbeddedDtor_and_clear();
buffer_ = 0;
state_ = FREE;
return *this;
}
protected:
/** @internal maybe invoke a registered TypeHandler's
* constructor function, which typically builds some
* content object into the buffer by placement new. */
void
invokeEmbeddedCtor()
{
__buffer_required();
if (nontrivial (instanceFunc_))
instanceFunc_.createAttached (buffer_);
}
/** @internal maybe invoke a registered TypeHandler's
* destructor function, which typically clears up some
* content object living within the buffer */
void
invokeEmbeddedDtor_and_clear()
{
__buffer_required();
if (nontrivial (instanceFunc_))
instanceFunc_.destroyAttached (buffer_);
buffer_ = 0;
}
private:
void
__must_not_be_NIL() const
{
if (NIL == state_)
throw error::Fatal ("Buffer metadata entry with state==NIL encountered."
" State transition logic broken (programming error)"
, LUMIERA_ERROR_LIFECYCLE);
}
void
@ -416,21 +409,144 @@ namespace engine {
"You should invoke markLocked(buffer) prior to access."
, LUMIERA_ERROR_LIFECYCLE );
}
void
__must_be_FREE() const
{
if (FREE != state_)
throw error::Logic ("Buffer already in use"
, LUMIERA_ERROR_LIFECYCLE );
REQUIRE (!buffer_, "Buffer marked as free, "
"but buffer pointer is set.");
}
void
__buffer_required() const
{
if (!buffer_)
throw error::Fatal ("Need concrete buffer for any further operations");
}
};
}
/**
* (Hash)Table to store and manage buffer metadata.
* Buffer metadata entries are comprised of a Key part and an extended
* Entry, holding the actual management and housekeeping metadata. The
* Keys are organised hierarchically and denote the "kind" of buffer.
* The hash values for lookup are based on the key part, chained with
* the actual memory location of the concrete buffer corresponding
* to the metadata entry to be retrieved.
*/
class Table
{
typedef std::tr1::unordered_map<HashVal,Entry> MetadataStore;
MetadataStore entries_;
public:
~Table() { verify_all_buffers_freed(); }
/** fetch metadata record, if any
* @param hashID for the Key part of the metadata entry
* @return pointer to the entry in the table or NULL
*/
Entry*
fetch (HashVal hashID)
{
MetadataStore::iterator pos = entries_.find (hashID);
if (pos != entries_.end())
return &(pos->second);
else
return NULL;
}
const Entry*
fetch (HashVal hashID) const
{
MetadataStore::const_iterator pos = entries_.find (hashID);
if (pos != entries_.end())
return &(pos->second);
else
return NULL;
}
/** store a copy of the given new metadata entry.
* The hash key for lookup is retrieved from the given Entry, by conversion to HashVal.
* Consequently, this will be the hashID of the parent Key (type), when the entry holds
** a NULL buffer (i.e. a "pseudo entry"). Otherwise, it will be this parent Key hash,
* extended by hashing the actual buffer address.
* @return reference to relevant entry for this Key. This might be a copy
* of the new entry, or an already existing entry with the same Key
*/
Entry&
store (Entry const& newEntry)
{
using std::make_pair;
REQUIRE (!fetch (newEntry), "duplicate buffer metadata entry");
MetadataStore::iterator pos = entries_.insert (make_pair (HashVal(newEntry), newEntry))
.first;
ENSURE (pos != entries_.end());
return pos->second;
}
void
remove (HashVal hashID)
{
uint cnt = entries_.erase (hashID);
ENSURE (cnt, "entry to remove didn't exist");
}
private:
void
verify_all_buffers_freed()
try
{
for_each (entries_, verify_is_free);
}
ERROR_LOG_AND_IGNORE (engine,"Shutdown of BufferProvider metadata store")
static void
verify_is_free (std::pair<HashVal, Entry> const& e)
{
WARN_IF (e.second.isLocked(), engine,
"Buffer still in use while shutting down BufferProvider? ");
}
};
}//namespace metadata
/* ===== Buffer Metadata Frontend ===== */
/**
* Registry for managing buffer metadata.
* This is an implementation level service,
* used by the standard BufferProvider implementation.
* Each metadata registry (instance) defines and maintains
* a family of "buffer types"; beyond the buffer storage size,
* the concrete meaning of those types is tied to the corresponding
* BufferProvider implementation and remains opaque. These types are
* represented as hierarchically linked hash keys. The implementation
* may bind a TypeHandler to a specific type, allowing automatic invocation
* of a "constructor" and "destructor" function on each buffer of this type,
* when \em locking or \em freeing the corresponding buffer.
*/
class BufferMetadata
: boost::noncopyable
{
Literal id_;
HashVal family_;
metadata::Table table_;
public:
typedef metadata::Key Key;
typedef metadata::Entry Entry;
@ -438,10 +554,10 @@ namespace engine {
* Such will maintain a family of buffer type entries
* and provide a service for storing and retrieving metadata
* for concrete buffer entries associated with these types.
* @param implementationID to distinguish families of
* type keys belonging to different registries.
*/
BufferMetadata (Literal implementationID)
: id_(implementationID)
, family_(hash_value(id_))
{ }
@ -459,20 +575,16 @@ namespace engine {
Key
key ( size_t storageSize
, TypeHandler instanceFunc =RAW_BUFFER
, LocalKey specifics =UNSPECIFIC)
{
REQUIRE (storageSize);
Key typeKey = trackKey (family_, storageSize);
if (nontrivial(instanceFunc))
{
typeKey = trackKey (typeKey, instanceFunc);
}
if (nontrivial(specifics))
{
typeKey = trackKey (typeKey, specifics);
}
return typeKey;
}
@ -484,50 +596,159 @@ namespace engine {
return trackKey (parentKey, instanceFunc);
}
/** create a sub-type,
* using a different private-ID (implementation defined) */
Key
key (Key const& parentKey, LocalKey specifics)
{
return trackKey (parentKey, specifics);
}
/** shortcut to access the Key part of a (probably new) Entry
* describing a concrete buffer at the given address
* @note might create/register a new Entry as a side-effect
*/
Key const&
key (Key const& parentKey, void* concreteBuffer, LocalKey const& implID =UNSPECIFIC)
{
Key derivedKey = Key::forEntry (parentKey, concreteBuffer);
Entry* existing = table_.fetch (derivedKey);
return existing? *existing
: markLocked (parentKey,concreteBuffer,implID);
}
Key const&
/** core operation to access or create a concrete buffer metadata entry.
* The hashID of the entry in question is built, based on the parentKey,
* which denotes a buffer type, and the concrete buffer address. If yet
* unknown, a new concrete buffer metadata Entry is created and initialised
* to LOCKED state. Otherwise just the existing Entry is fetched.
* @note this function really \em activates the buffer.
* In case the type (Key) involves a TypeHandler (functor),
* its constructor function will be invoked, if actually a new
* entry gets created. Typically this mechanism will be used
* to placement-create an object into the buffer.
* @param parentKey a key describing the \em type of the buffer
* @param concreteBuffer storage pointer, must not be NULL
* @param onlyNew disallow fetching an existing entry
* @throw error::Logic when #onlyNew is set, but an equivalent entry
* was registered previously. This indicates a serious error
* in buffer lifecycle management.
* @throw error::Invalid when invoked with NULL buffer. Use the #key
* functions instead to register and track type keys.
* @return reference to the entry stored in the metadata table.
* @warning the exposed reference might become invalid when the
* buffer is released or re-used later.
*/
Entry&
lock (Key const& parentKey
,void* concreteBuffer
,LocalKey const& implID =UNSPECIFIC
,bool onlyNew =false)
{
if (!concreteBuffer)
throw error::Invalid ("Attempt to lock a slot for a NULL buffer"
, error::LUMIERA_ERROR_BOTTOM_VALUE);
Entry newEntry(parentKey, concreteBuffer, implID);
Entry* existing = table_.fetch (newEntry);
if (existing && onlyNew)
throw error::Logic ("Attempt to lock a slot for a new buffer, "
"while actually the old buffer is still locked"
, error::LUMIERA_ERROR_LIFECYCLE );
if (existing && existing->isLocked())
throw error::Logic ("Attempt to re-lock a buffer still in use"
, error::LUMIERA_ERROR_LIFECYCLE );
if (!existing)
return store_and_lock (newEntry); // actual creation
else
return existing->lock (concreteBuffer);
}
/** access the metadata record registered with the given hash key.
* This might be a pseudo entry in case of a Key describing a buffer type.
* Otherwise, the entry associated with a concrete buffer pointer is returned
* by reference, and can be modified (e.g. state change)
* @param hashID which can be calculated from the Key
* @throw error::Invalid when there is no such entry
* @note use #isKnown to check existence
*/
Entry&
get (HashVal hashID)
{
UNIMPLEMENTED ("access the plain key entry");
}
Entry&
get (Key key)
{
UNIMPLEMENTED ("access, possibly create metadata records");
Entry* entry = table_.fetch (hashID);
if (!entry)
throw error::Invalid ("Attempt to access an unknown buffer metadata entry");
return *entry;
}
bool
isKnown (HashVal key) const
{
UNIMPLEMENTED ("diagnostics: known record?");
return bool(table_.fetch (key));
}
bool
isLocked (HashVal key) const
{
UNIMPLEMENTED ("diagnostics: actually locked buffer instance record?");
const Entry* entry = table_.fetch (key);
return entry
&& entry->isLocked();
}
/* == memory management == */
Entry& markLocked (Key const& parentKey, const void* buffer);
void release (HashVal key);
/* == memory management operations == */
private:
/** combine the type (Key) with a concrete buffer,
* thereby marking this buffer as locked. Store a concrete
* metadata Entry to account for this fact. This might include
* invoking a constructor function, in case the type (Key)
* defines a (nontrivial) TypeHandler.
* @throw error::Fatal when locking a NULL buffer
* @throw exceptions which might be raised by a TypeHandler's
* constructor function. In this case, the Entry remains
* created, but is marked as FREE
*/
Entry&
markLocked (Key const& parentKey, void* buffer, LocalKey const& implID =UNSPECIFIC)
{
if (!buffer)
throw error::Fatal ("Attempt to lock for a NULL buffer. Allocation floundered?"
, error::LUMIERA_ERROR_BOTTOM_VALUE);
return this->lock(parentKey, buffer, implID, true); // force creation of a new entry
}
/** purge the bare metadata Entry from the metadata tables.
* @throw error::Logic if the entry isn't marked FREE already
*/
void
release (HashVal key)
{
Entry* entry = table_.fetch (key);
if (!entry) return;
ASSERT (entry && (key == HashVal(*entry)));
release (*entry);
}
void
release (Entry const& entry)
{
if (FREE != entry.state())
throw error::Logic ("Attempt to release a buffer still in use"
, error::LUMIERA_ERROR_LIFECYCLE);
table_.remove (HashVal(entry));
}
private:
template<typename PAR, typename DEF>
Key
@ -542,43 +763,31 @@ namespace engine {
maybeStore (Key const& key)
{
if (isKnown (key)) return;
UNIMPLEMENTED ("registry for type keys");
table_.store (Entry (key, NULL));
}
Entry&
store_and_lock (Entry const& metadata)
{
Entry& newEntry = table_.store (metadata);
try
{
newEntry.invokeEmbeddedCtor();
ENSURE (LOCKED == newEntry.state());
ENSURE (newEntry.access());
}
catch(...)
{
newEntry.mark(FREE);
throw;
}
return newEntry;
}
};
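The store_and_lock() member above follows a commit/rollback protocol: the entry is stored, then the embedded constructor runs, and a constructor failure downgrades the entry to FREE before the exception propagates. A reduced stand-alone sketch of that protocol (all names are illustrative stand-ins, not the real metadata types):

```cpp
#include <cassert>
#include <stdexcept>

// Sketch of the commit/rollback protocol: when the embedded constructor
// throws, the already-registered entry is marked FREE instead of leaving
// a half-initialised LOCKED record behind.
enum State { NIL, LOCKED, FREE };

struct Entry
{
    State state;
    Entry() : state(NIL) { }

    void
    invokeEmbeddedCtor (bool fail)
    {
        if (fail) throw std::runtime_error("embedded ctor failure");
        state = LOCKED;
    }
    void mark (State s) { state = s; }
};

void
store_and_lock (Entry& newEntry, bool ctorFails)
{
    try {
        newEntry.invokeEmbeddedCtor (ctorFails);
    }
    catch(...) {
        newEntry.mark (FREE);   // rollback; the entry itself stays registered
        throw;
    }
}

State
attemptLock (bool ctorFails)   // helper: observe the resulting state
{
    Entry entry;
    try { store_and_lock (entry, ctorFails); }
    catch(...) { /* in the real code the error propagates to the caller */ }
    return entry.state;
}
```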
/** */
inline Metadata::Entry&
Metadata::markLocked (Key const& parentKey, const void* buffer)
{
UNIMPLEMENTED ("transition to locked state");
if (!buffer)
throw error::Fatal ("Attempt to lock for a NULL buffer. Allocation floundered?"
, error::LUMIERA_ERROR_BOTTOM_VALUE);
Key newKey = this->key (parentKey, buffer);
if (isLocked(newKey))
throw error::Logic ("Attempt to lock a slot for a new buffer, "
"while actually the old buffer is still locked."
, error::LUMIERA_ERROR_LIFECYCLE );
return this->get(newKey);
}
inline void
Metadata::release (HashVal key)
{
UNIMPLEMENTED ("metadata memory management");
}
} // namespace engine
#endif
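The operations above lean on a small contract provided by the metadata table: fetch() by hash ID yields NULL when nothing is registered, store() returns a reference to the stored copy, remove() erases by hash. The real metadata::Table is defined elsewhere; this stand-in merely illustrates the assumed contract:

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>

// Sketch of the assumed table contract: entries addressed by hash ID,
// pointer-or-NULL lookup, reference-returning store, erase by hash.
typedef std::size_t HashVal;

struct Entry
{
    HashVal id;
    bool locked;
};

class Table
{
    std::unordered_map<HashVal,Entry> map_;
public:
    Entry*
    fetch (HashVal key)
    {
        std::unordered_map<HashVal,Entry>::iterator pos = map_.find (key);
        return pos == map_.end()? 0 : &pos->second;
    }

    Entry& store (Entry const& entry) { return map_[entry.id] = entry; }
    void remove (HashVal key) { map_.erase (key); }
};
```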

View file

@ -21,9 +21,12 @@
* *****************************************************/
#include "lib/error.hpp"
#include "proc/engine/buffer-provider.hpp"
#include "proc/engine/buffer-metadata.hpp"
#include "lib/util.hpp"
using util::isSameObject;
namespace engine {
@ -33,10 +36,17 @@ namespace engine {
const uint DEFAULT_DESCRIPTOR = 0;
}
LUMIERA_ERROR_DEFINE (BUFFER_MANAGEMENT, "Problem providing working buffers");
/** build a new provider instance, managing a family of buffers.
* The metadata of these buffers is organised hierarchically based on
* chained hash values, using the #implementationID as a seed.
* @param implementationID symbolic ID setting this family of buffers apart.
*/
BufferProvider::BufferProvider (Literal implementationID)
: meta_(new Metadata (implementationID))
: meta_(new BufferMetadata (implementationID))
{ }
BufferProvider::~BufferProvider() { }
@ -47,9 +57,9 @@ namespace engine {
* currently locked and usable by client code
*/
bool
BufferProvider::verifyValidity (BufferDescriptor const&)
BufferProvider::verifyValidity (BufferDescriptor const& bufferID) const
{
UNIMPLEMENTED ("BufferProvider basic and default implementation");
return meta_->isLocked (bufferID);
}
@ -58,8 +68,176 @@ namespace engine {
{
return BufferDescriptor (*this, meta_->key (storageSize));
}
BufferDescriptor
BufferProvider::getDescriptorFor(size_t storageSize, TypeHandler specialTreatment)
{
return BufferDescriptor (*this, meta_->key (storageSize, specialTreatment));
}
size_t
BufferProvider::getBufferSize (HashVal typeID) const
{
metadata::Key& typeKey = meta_->get (typeID);
return typeKey.storageSize();
}
/** callback from implementation to build and enrol a BufferHandle,
* to be returned to the client as result of the #lockBuffer call.
* Performs the necessary metadata state transition leading from an
* abstract buffer type to a metadata::Entry corresponding to an
* actual buffer, which is locked for exclusive use by one client.
*/
BuffHandle
BufferProvider::buildHandle (HashVal typeID, void* storage, LocalKey const& implID)
{
metadata::Key& typeKey = meta_->get (typeID);
metadata::Entry& entry = meta_->markLocked(typeKey, storage, implID);
return BuffHandle (BufferDescriptor(*this, entry), storage);
}
/** BufferProvider API: declare in advance the need for working buffers.
* This optional call allows client code to ensure the availability of the
* necessary working space, prior to starting the actual operations. The
* client may reasonably assume to get the actual number of buffers, as
* indicated by the return value. A provider may be able to handle
* various kinds of buffers (e.g. of differing size), which are
* distinguished by the \em type embodied into the BufferDescriptor.
* @return maximum number of simultaneously usable buffers of this type,
* to be retrieved later through calls to #lockBuffer.
* @throw error::State when no buffer of this kind can be provided
* @note the returned count may differ from the requested count.
*/
uint
BufferProvider::announce (uint count, BufferDescriptor const& type)
{
uint actually_possible = prepareBuffers (count, type);
if (!actually_possible)
throw error::State ("unable to fulfil request for buffers"
,LUMIERA_ERROR_BUFFER_MANAGEMENT);
return actually_possible;
}
/** BufferProvider API: retrieve a single buffer for exclusive use.
* This call actually claims a buffer of this type and marks it for
* use by client code. The returned handle allows for convenient access,
* but provides no automatic tracking or memory management. The client is
* explicitly responsible to invoke #releaseBuffer (which can be done directly
* on the BuffHandle).
* @return a copyable handle, representing this buffer and this usage transaction.
* @throw error::State when unable to provide this buffer
* @note this function may be used right away, without prior announcing, but then
* the client should be prepared for exceptions. The #announce operation allows
* the client to establish a reliably available baseline.
*/
BuffHandle
BufferProvider::lockBuffer (BufferDescriptor const& type)
{
REQUIRE (was_created_by_this_provider (type));
return provideLockedBuffer (type);
} // is expected to call buildHandle() --> state transition
/** BufferProvider API: state transition to \em emitted state.
* Client code may signal a state transition through this optional operation.
* The actual meaning of an "emitted" buffer is implementation defined; similarly,
* some back-ends may actually do something when emitting a buffer (e.g. commit data
* to cache), while others just set a flag or do nothing at all. This state transition
* may be invoked at most once per locked buffer.
* @throw error::Fatal in case of invalid state transition sequence. Only a locked buffer
* may be emitted, and at most once.
* @warning by convention, emitting a buffer implies that the contained data is ready and
* might be used by other parts of the application.
* An emitted buffer should not be modified anymore.
*/
void
BufferProvider::emitBuffer (BuffHandle const& handle)
{
metadata::Entry& metaEntry = meta_->get (handle.entryID());
mark_emitted (metaEntry.parentKey(), metaEntry.localKey());
metaEntry.mark(EMITTED);
}
/** BufferProvider API: declare done and detach.
* Client code is required to release \em each previously locked buffer eventually.
* @warning invalidates the BuffHandle, clients mustn't access the buffer anymore.
* Right after releasing, an access through the handle will throw;
* yet the buffer might be re-used later, accidentally
* causing the handle to appear valid again.
* @note EX_FREE
*/
void
BufferProvider::releaseBuffer (BuffHandle const& handle)
try {
metadata::Entry& metaEntry = meta_->get (handle.entryID());
metaEntry.mark(FREE); // might invoke embedded dtor function
detachBuffer (metaEntry.parentKey(), metaEntry.localKey());
meta_->release (metaEntry);
}
ERROR_LOG_AND_IGNORE (engine, "releasing a buffer from BufferProvider")
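releaseBuffer() above is written as a function-try-block, with ERROR_LOG_AND_IGNORE expanding to a sequence of catch clauses that log and consume any error: release operations are typically invoked from destructors, which by convention must not throw. A self-contained sketch of that pattern, with plain stream output standing in for the WARN/ERROR logging calls:

```cpp
#include <cassert>
#include <iostream>
#include <stdexcept>

// Sketch of the log-and-ignore release pattern: the whole function body is
// a function-try-block, so an exception raised while releasing is reported
// and swallowed rather than propagated out of a (possibly) destructor path.
bool released = false;

void
detachBuffer (bool fail)
{
    if (fail) throw std::runtime_error("detach failure");
}

void
releaseBuffer (bool fail)
try {
    detachBuffer (fail);
    released = true;
}
catch (std::exception& problem)
{   // stands in for WARN (engine, "releasing a buffer failed: %s", ...)
    std::cerr << "releasing a buffer failed: " << problem.what() << '\n';
}
catch (...)
{   // stands in for the ERROR(...) clause handling unknown exceptions
    std::cerr << "releasing a buffer failed: unknown exception\n";
}
```

Falling off the end of a handler in a function-try-block of an ordinary void function simply returns, which is exactly the "consume the error" behaviour wanted here.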
/** @warning this operation locally modifies the metadata entry of a single buffer
* to attach a TypeHandler taking ownership of an object embedded within the buffer.
* The client is responsible for actually placement-constructing the object; moreover
* the client is responsible for any damage done to already existing buffer content.
* @note the buffer must be in locked state and the underlying buffer type must not define
* a non-trivial TypeDescriptor, because there is no clean way of superseding an
* existing TypeDescriptor, which basically is just a functor and possibly
* could perform any operation on buffer clean-up.
* @note EX_STRONG
*/
void
BufferProvider::attachTypeHandler (BuffHandle const& target, BufferDescriptor const& reference)
{
metadata::Entry& metaEntry = meta_->get (target.entryID());
metadata::Entry& refEntry = meta_->get (reference);
REQUIRE (refEntry.isTypeKey());
REQUIRE (!metaEntry.isTypeKey());
if (!metaEntry.isLocked())
throw error::Logic ("unable to attach an object because buffer isn't locked for use"
, LUMIERA_ERROR_LIFECYCLE);
metaEntry.useTypeHandlerFrom (refEntry); // EX_STRONG
}
/** @internal abort normal lifecycle, reset the underlying buffer and detach from it.
* This allows breaking out of normal usage and resetting the handle to \em invalid state
* @param invokeDtor whether the clean-up function of a TypeHandler registered with
* the buffer metadata should be invoked prior to resetting the metadata state.
* Default is \em not to invoke anything
* @note EX_FREE
*/
void
BufferProvider::emergencyCleanup (BuffHandle const& target, bool invokeDtor)
try {
metadata::Entry& metaEntry = meta_->get (target.entryID());
metaEntry.invalidate (invokeDtor);
detachBuffer (metaEntry.parentKey(), metaEntry.localKey());
meta_->release (metaEntry);
}
ERROR_LOG_AND_IGNORE (engine, "cleanup of buffer metadata while handling an error")
bool
BufferProvider::was_created_by_this_provider (BufferDescriptor const& descr) const
{
return isSameObject (*this, *descr.provider_);
}
/* === BufferDescriptor and BuffHandle === */
@ -71,12 +249,67 @@ namespace engine {
}
size_t
BufferDescriptor::determineBufferSize() const
{
return provider_->getBufferSize (*this);
}
void
BuffHandle::emit()
{
REQUIRE (isValid());
descriptor_.provider_->emitBuffer(*this);
}
void
BuffHandle::release()
{
UNIMPLEMENTED ("forward buffer release call to buffer provider");
if (pBuffer_)
{
REQUIRE (isValid());
descriptor_.provider_->releaseBuffer(*this);
pBuffer_ = 0;
}
ENSURE (!isValid());
}
void
BuffHandle::emergencyCleanup()
{
descriptor_.provider_->emergencyCleanup(*this); // EX_FREE
pBuffer_ = 0;
}
/** Install a standard TypeHandler for an already locked buffer.
* This causes the dtor function to be invoked when releasing this buffer.
* The assumption is that client code will placement-construct an object
* into this buffer right away, and thus we're taking ownership of that object.
* @param type a reference BufferDescriptor defining an embedded TypeHandler to use
* A copy of this TypeHandler will be stored into the local metadata for
* this buffer only, not altering the basic buffer type in any way
* @throw lifecycle error when attempting to treat a buffer not in locked state
* @throw error::Logic in case of insufficient buffer space to hold the
* intended target object
* @note EX_STRONG
*/
void
BuffHandle::takeOwnershipFor(BufferDescriptor const& type)
{
if (!this->isValid())
throw error::Logic ("attaching an object requires a buffer in locked state"
, LUMIERA_ERROR_LIFECYCLE);
if (this->size() < type.determineBufferSize())
throw error::Logic ("insufficient buffer size to hold an instance of that type");
descriptor_.provider_->attachTypeHandler(*this, type); // EX_STRONG
}
} // namespace engine

View file

@ -22,7 +22,7 @@
/** @file buffer-provider.hpp
** Abstraction to represent buffer management and lifecycle within the render engine.
** It turns out that -- throughout the render engine implementation -- we never need
** It turns out that -- throughout the render engine implementation -- we never need
** direct access to the buffers holding media data. Buffers are just some entity to be \em managed,
** i.e. "allocated", "locked" and "released"; the actual meaning of these operations is an implementation detail.
** The code within the render engine just pushes around BufferHandle objects, which act as a front-end,
@ -45,6 +45,8 @@
#include "lib/error.hpp"
#include "lib/symbol.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/type-handler.hpp"
#include "proc/engine/buffer-local-key.hpp"
#include <boost/noncopyable.hpp>
#include <boost/scoped_ptr.hpp>
@ -56,7 +58,10 @@ namespace engine {
using lib::Literal;
class Metadata;
class BufferMetadata;
LUMIERA_ERROR_DECLARE (BUFFER_MANAGEMENT); ///< Problem providing working buffers
/**
@ -66,46 +71,70 @@ namespace engine {
* - "locking" a buffer to yield a buffer handle
* - dereferencing this smart-handle class
*
* @warning all of BufferProvider is assumed to run within a threadsafe environment.
*
* @todo as of 6/2011 buffer management within the engine is still a bit vague
* @todo as of 11/11 thread safety within the engine remains to be clarified
*/
class BufferProvider
: boost::noncopyable
{
scoped_ptr<Metadata> meta_;
scoped_ptr<BufferMetadata> meta_;
protected: /* === for Implementation by concrete providers === */
protected:
BufferProvider (Literal implementationID);
virtual uint prepareBuffers (uint count, HashVal typeID) =0;
virtual BuffHandle provideLockedBuffer (HashVal typeID) =0;
virtual void mark_emitted (HashVal typeID, LocalKey const&) =0;
virtual void detachBuffer (HashVal typeID, LocalKey const&) =0;
public:
virtual ~BufferProvider(); ///< this is an interface
virtual ~BufferProvider(); ///< this is an ABC
virtual uint announce (uint count, BufferDescriptor const&) =0;
uint announce (uint count, BufferDescriptor const&);
virtual BuffHandle lockBufferFor (BufferDescriptor const&) =0;
virtual void releaseBuffer (BuffHandle const&) =0;
BuffHandle lockBuffer (BufferDescriptor const&);
void emitBuffer (BuffHandle const&);
void releaseBuffer (BuffHandle const&);
template<typename BU>
BuffHandle lockBufferFor ();
/** allow for attaching and owning an object within an already created buffer */
void attachTypeHandler (BuffHandle const& target, BufferDescriptor const& reference);
void emergencyCleanup (BuffHandle const& target, bool invokeDtor =false);
/** describe the kind of buffer managed by this provider */
BufferDescriptor getDescriptorFor(size_t storageSize=0);
BufferDescriptor getDescriptorFor(size_t storageSize, TypeHandler specialTreatment);
template<typename BU>
BufferDescriptor getDescriptor();
/* === API for BuffHandle internal access === */
bool verifyValidity (BufferDescriptor const&);
bool verifyValidity (BufferDescriptor const&) const;
size_t getBufferSize (HashVal typeID) const;
protected:
BuffHandle buildHandle (HashVal typeID, void* storage, LocalKey const&);
bool was_created_by_this_provider (BufferDescriptor const&) const;
};
/* === Implementation === */
/** convenience shortcut:
@ -119,15 +148,20 @@ namespace engine {
BuffHandle
BufferProvider::lockBufferFor()
{
UNIMPLEMENTED ("convenience shortcut to announce and lock for a specific object type");
BufferDescriptor attach_object_automatically = getDescriptor<BU>();
return lockBuffer (attach_object_automatically);
}
/** define a "buffer type" for automatically creating
* an instance of the template type embedded into the buffer
* and destroying that embedded object when releasing the buffer.
*/
template<typename BU>
BufferDescriptor
BufferProvider::getDescriptor()
{
UNIMPLEMENTED ("build descriptor for automatically placing an object instance into the buffer");
return getDescriptorFor (sizeof(BU), TypeHandler::create<BU>());
}
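getDescriptor<BU>() above pairs the storage size with TypeHandler::create<BU>(), which is assumed to capture a type-erased constructor/destructor pair for placement-creating an instance of BU within a raw buffer. The real TypeHandler lives in type-handler.hpp; the following stand-alone sketch only illustrates that assumed shape:

```cpp
#include <cassert>
#include <new>

// Sketch of a type-erased ctor/dtor pair: create<BU>() captures plain
// functions which placement-construct resp. destroy a BU in a raw buffer.
struct TypeHandler
{
    void (*ctor)(void*);
    void (*dtor)(void*);

    template<class BU>
    static TypeHandler
    create()
    {
        TypeHandler handler = { &construct<BU>, &destroy<BU> };
        return handler;
    }

private:
    template<class BU> static void construct (void* buf) { new(buf) BU; }
    template<class BU> static void destroy (void* buf) { static_cast<BU*>(buf)->~BU(); }
};

int instancesAlive = 0;   // instrumentation for the demo type

struct Probe
{
    Probe() { ++instancesAlive; }
   ~Probe() { --instancesAlive; }
};
```

With such a handler attached to a buffer type, "locking" a buffer can transparently run the constructor, and "freeing" it can run the destructor, which is what the embedded-object buffers rely on.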

View file

@ -0,0 +1,129 @@
/*
BUFFHANDLE-ATTACH.hpp - Buffer handle extension to attach objects into the buffer
Copyright (C) Lumiera.org
2008, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file buffhandle-attach.hpp
** Extension to allow placing objects right into the buffers, taking ownership.
** This extension is mostly helpful for writing unit-tests, and beyond that for the
** rather unusual case where we need to place a full-blown object into the buffer,
** instead of just plain data. A possible use case for this mechanism is to maintain
** state within a calculation stream, feeding this local state to the individual render
** nodes embedded into a "state frame". Some effect processors indeed need to maintain state
** beyond the single frame (e.g. averaging, integrating, sound compression), which usually
** is handled by applying an "instance" of that processor to the frames to be calculated
** in a straight sequence.
**
** BuffHandle and the underlying BufferProvider standard implementation support that case
** by attaching an object managing functor to the metadata. This way, the state can live
** directly embedded into the frame and still be accessed like an object. To keep the
** header and compilation footprint low, the implementation of the functions supporting
** this special case was split out of the basic buffhandle.hpp
**
** @see BuffHandle
** @see BufferProviderProtocol_test usage demonstration
*/
#ifndef ENGINE_BUFFHANDLE_ATTACH_H
#define ENGINE_BUFFHANDLE_ATTACH_H
#include "lib/error.hpp"
#include "proc/engine/buffer-provider.hpp"
#include "proc/engine/buffhandle.hpp"
namespace engine {
/* === BuffHandle Implementation === */
#define _EXCEPTION_SAFE_INVOKE(_CTOR_) \
try \
{ \
return *new(pBuffer_) _CTOR_; \
} \
catch(...) \
{ \
emergencyCleanup(); /* EX_FREE */ \
pBuffer_ = 0; \
throw; \
}
/** convenience shortcut: place and maintain an object within the buffer.
* This operation performs the necessary steps to attach an object;
* if the buffer isn't locked yet, it will do so. Moreover, the created
* object will be owned by the buffer management facilities, i.e. the
* destructor is registered as cleanup function.
* @throw error::Logic in case there is already another TypeHandler registered
* in charge of managing the buffer contents, or when the object to create
* would not fit into this buffer.
*/
template<typename BU>
inline BU&
BuffHandle::create()
{
takeOwnershipFor<BU>();
_EXCEPTION_SAFE_INVOKE (BU());
}
#undef _EXCEPTION_SAFE_INVOKE
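The _EXCEPTION_SAFE_INVOKE macro above placement-constructs the object and, on a throwing constructor, detaches the buffer again before re-raising. The same protocol written out as plain code, with a counter standing in for the emergencyCleanup() call (all names here are illustrative):

```cpp
#include <cassert>
#include <new>
#include <stdexcept>

// Sketch of exception-safe placement construction: when the payload ctor
// throws, the buffer is "detached" again before the exception propagates,
// so no half-built object remains locked within the buffer.
int cleanups = 0;

struct Payload
{
    Payload (bool fail) { if (fail) throw std::runtime_error("ctor failure"); }
};

Payload&
createInBuffer (void* buffer, bool fail)
{
    try {
        return *new(buffer) Payload(fail);
    }
    catch(...) {
        ++cleanups;      // stands in for emergencyCleanup(); pBuffer_ = 0;
        throw;
    }
}

bool
attempt (bool fail)     // helper: true when construction succeeded
{
    char buffer[sizeof(Payload)];
    try { createInBuffer (buffer, fail); return true; }
    catch(...) { return false; }
}
```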
/** @internal helper to attach a TypeHandler after the fact.
* @note this prepares the buffer for placement-creating an embedded object.
* It doesn't actually create an object
* @throw error::Logic in case there is already another TypeHandler registered
* in charge of managing the buffer contents, or when the object to create
* would not fit into this buffer.
*/
template<typename BU>
inline void
BuffHandle::takeOwnershipFor()
{
BufferDescriptor howto_attach_object_automatically
= descriptor_.provider_->getDescriptor<BU>();
takeOwnershipFor (howto_attach_object_automatically); // EX_STRONG
}
/** convenience shortcut: access the buffer contents casted to a specific type.
* @warning this is a \em blind cast, there is no type safety.
* @note clients can utilise the metadata::LocalKey to keep track of some
* specific property of the buffer, like e.g. the type of object.
*/
template<typename BU>
inline BU&
BuffHandle::accessAs()
{
if (!pBuffer_)
throw error::Logic ("buffer not (yet) locked for access by clients"
, LUMIERA_ERROR_LIFECYCLE);
return *reinterpret_cast<BU*> (pBuffer_);
}
} // namespace engine
#endif

View file

@ -21,9 +21,9 @@
*/
/** @file buffhandle.hpp
** Various bits needed to support the buffer management within the render nodes.
** A front-end to support the buffer management within the render nodes.
** When pulling data from predecessor nodes and calculating new data, each render node
** needs several input and output buffers. These may be allocated and provided by several
** needs several input and output buffers. These may be allocated and provided by various
** different "buffer providers" (for example the frame cache). Typically, the real buffers
** will be passed as parameters to the actual job instance when scheduled, drawing on the
** results of prerequisite jobs. Yet the actual job implementation remains agnostic with
@ -31,13 +31,19 @@
** objects around. The actual render function gets an array of C-pointers to the actual
** buffers, and for accessing those buffers, the node needs to keep a table of buffer
** pointers, and for releasing the buffers later on, we utilise the buffer handles.
** The usage pattern of those buffer pointer tables is stack-like, thus the actual
** implementation utilises a single large buffer pointer array per pull() call
** sequence and dynamically claims small chunks for each node.
**
** @see nodewiring-def.hpp
** @see nodeoperation.hpp
** @see bufftable.hpp storage for the buffer table
** These buffer handles are based on a buffer descriptor record, which is opaque as far
** as the client is concerned. BufferDescriptor acts as a representation of the type or
** kind of buffer. The only way to obtain such a BufferDescriptor is from a concrete
** BufferProvider implementation. A back-link to this owning and managing provider is
** embedded into the BufferDescriptor, allowing clients to retrieve a buffer handle corresponding
** to an actual buffer provided and managed behind the scenes. There is no automatic
** resource management; clients are responsible to invoke BuffHandle#release when done.
**
** @see BufferProvider
** @see BufferProviderProtocol_test usage demonstration
** @see OutputSlot
** @see bufftable.hpp storage for the buffer table
** @see engine::RenderInvocation
*/
@ -52,6 +58,12 @@
namespace engine {
namespace error = lumiera::error;
using error::LUMIERA_ERROR_LIFECYCLE;
typedef size_t HashVal; ////////////TICKET #722
class BuffHandle;
class BufferProvider;
@ -64,54 +76,36 @@ namespace engine {
* @note this descriptor and especially the #subClassification_ is really owned
* by the BufferProvider, which may use (and even change) the opaque contents
* to organise the internal buffer management.
*
* @todo try to move that definition into buffer-provider.hpp ////////////////////////////////////TICKET #249
*/
class BufferDescriptor
{
protected:
BufferProvider* provider_;
uint64_t subClassification_;
HashVal subClassification_;
BufferDescriptor(BufferProvider& manager, uint64_t detail)
BufferDescriptor(BufferProvider& manager, HashVal detail)
: provider_(&manager)
, subClassification_(detail)
{ }
friend class BufferProvider;
friend class BuffHandle;
public:
// using standard copy operations
bool verifyValidity() const;
size_t determineBufferSize() const;
operator HashVal() const { return subClassification_; }
};
class ProcNode;
typedef ProcNode* PNode;
struct ChannelDescriptor ///////TODO really need to define that here? it is needed for node wiring only
{
const lumiera::StreamType * bufferType; /////////////////////////////////////////TICKET #828
};
struct InChanDescriptor : ChannelDescriptor
{
PNode dataSrc; ///< the ProcNode to pull this input from
uint srcChannel; ///< output channel to use on the predecessor node
};
/**
* Handle for a buffer for processing data, abstracting away the actual implementation.
* The real buffer pointer can be retrieved by dereferencing this smart-handle class.
*
* @todo as of 6/2011 it isn't clear how buffer handles are actually created
* and how the lifecycle (and memory) management works //////////////////////TICKET #249 rework BuffHandle creation and usage
*/
class BuffHandle
: public lib::BoolCheckable<BuffHandle>
@ -127,15 +121,16 @@ namespace engine {
/** @internal a buffer handle may be obtained by "locking"
* a buffer from the corresponding BufferProvider */
BuffHandle(BufferDescriptor const& typeInfo, PBuff storage = 0)
BuffHandle(BufferDescriptor const& typeInfo, void* storage = 0)
: descriptor_(typeInfo)
, pBuffer_(storage)
, pBuffer_(static_cast<PBuff>(storage))
{ }
// using standard copy operations
void emit();
void release();
@ -146,6 +141,8 @@ namespace engine {
BU& accessAs();
//////////////////////////////////////////TICKET #249 this operator looks obsolete. The Buff type is a placeholder type,
//////////////////////////////////////////TODO it should never be accessed directly from within Lumiera engine code
Buff&
operator* () const
{
@ -160,42 +157,27 @@ namespace engine {
&& descriptor_.verifyValidity();
}
HashVal
entryID() const
{
return HashVal(descriptor_);
}
size_t
size() const
{
UNIMPLEMENTED ("forward to the buffer provider for storage size diagnostics");
return descriptor_.determineBufferSize();
}
private:
template<typename BU>
void takeOwnershipFor();
void takeOwnershipFor(BufferDescriptor const& type);
void emergencyCleanup();
};
/* === Implementation details === */
/** convenience shortcut: place and maintain an object within the buffer.
* This operation performs the necessary steps to attach an object;
* if the buffer isn't locked yet, it will do so. Moreover, the created
* object will be owned by the buffer management facilities, i.e. the
* destructor is registered as cleanup function.
*/
template<typename BU>
BU&
BuffHandle::create()
{
UNIMPLEMENTED ("convenience shortcut to attach/place an object in one sway");
}
/** convenience shortcut: access the buffer contents in a typesafe fashion.
* This is equivalent to a plain dereferentiation with additional metadata check
* @throw error::Logic in case of type mismatch \c LUMIERA_ERROR_WRONG_TYPE
*/
template<typename BU>
BU&
BuffHandle::accessAs()
{
UNIMPLEMENTED ("convenience shortcut to access buffer contents typesafe");
}
} // namespace engine

View file

@ -26,7 +26,7 @@
#include "lib/error.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/channel-descriptor.hpp"
#include "proc/engine/procnode.hpp"
#include <boost/noncopyable.hpp>
@ -70,6 +70,7 @@ namespace engine {
PBu inBuff;
};
class BufferDescriptor;
/** Obsolete, to be rewritten /////TICKET #826 */
class BuffTableStorage

View file

@ -0,0 +1,75 @@
/*
CHANNEL-DESCRIPTOR.hpp - Channel / Buffer type representation for the engine
Copyright (C) Lumiera.org
2008, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file channel-descriptor.hpp
** Representation of the Media type of a data channel used within the engine.
**
** @todo as it stands (11/2011) this file is obsoleted and needs to be refactored,
** alongside with adapting the node invocation to the new BufferProvider interface.
**
** @see nodewiring-def.hpp
** @see nodeoperation.hpp
** @see bufftable-obsolete.hpp storage for the buffer table
** @see engine::RenderInvocation
*/
#ifndef ENGINE_CHANNEL_DESCRIPTOR_H
#define ENGINE_CHANNEL_DESCRIPTOR_H
#include "lib/error.hpp"
#include "lib/streamtype.hpp"
#include "lib/bool-checkable.hpp"
namespace engine {
namespace error = lumiera::error;
using error::LUMIERA_ERROR_LIFECYCLE;
typedef size_t HashVal; ////////////TICKET #722
class BuffHandle;
class BufferProvider;
class BufferDescriptor;
class ProcNode;
typedef ProcNode* PNode;
struct ChannelDescriptor ///////TODO really need to define that here? it is needed for node wiring only
{
const lumiera::StreamType * bufferType; /////////////////////////////////////////TICKET #828
};
struct InChanDescriptor : ChannelDescriptor
{
PNode dataSrc; ///< the ProcNode to pull this input from
uint srcChannel; ///< output channel to use on the predecessor node
};
} // namespace engine
#endif

View file

@ -22,18 +22,10 @@
#include "lib/error.hpp"
#include "include/logging.h"
#include "lib/meta/function.hpp"
#include "lib/scoped-ptrvect.hpp"
#include "proc/engine/diagnostic-buffer-provider.hpp"
#include <boost/scoped_array.hpp>
//#include <vector>
using lib::ScopedPtrVect;
using boost::scoped_array;
#include "proc/engine/tracking-heap-block-provider.hpp"
namespace engine {
@ -43,115 +35,10 @@ namespace engine {
lib::Singleton<DiagnosticBufferProvider> DiagnosticBufferProvider::diagnostics;
class Block
: boost::noncopyable
{
size_t size_;
scoped_array<char> storage_;
bool was_locked_;
public:
Block()
: size_(0)
, storage_()
, was_locked_(false)
{ }
bool
was_used() const
{
return was_locked_;
}
bool
was_closed() const
{
return was_locked_;
}
void*
accessMemory() const
{
return storage_.get();
}
};
namespace { // Details of allocation and accounting
const uint MAX_BUFFERS = 50;
} // (END) Details of allocation and accounting
/**
* @internal DiagnosticBufferProvider's PImpl.
* Uses a linearly growing table of heap allocated buffer blocks,
* which will never be discarded, unless the PImpl is discarded as a whole.
* This way, the tracked usage information remains available after the fact.
*/
class DiagnosticBufferProvider::HeapMemProvider
: public BufferProvider
, public ScopedPtrVect<Block>
{
virtual uint
announce (uint count, BufferDescriptor const& type)
{
UNIMPLEMENTED ("pre-register storage for buffers of a specific kind");
}
virtual BuffHandle
lockBufferFor (BufferDescriptor const& descriptor)
{
UNIMPLEMENTED ("lock buffer for exclusive use");
}
virtual void
releaseBuffer (BuffHandle const& handle)
{
UNIMPLEMENTED ("release a buffer and invalidate the handle");
}
public:
HeapMemProvider()
: BufferProvider ("Diagnostic_HeapAllocated")
{ }
virtual ~HeapMemProvider()
{
INFO (proc_mem, "discarding %zu diagnostic buffer entries", HeapMemProvider::size());
}
Block&
access_or_create (uint bufferID)
{
while (!withinStorageSize (bufferID))
manage (new Block);
ENSURE (withinStorageSize (bufferID));
return (*this)[bufferID];
}
private:
bool
withinStorageSize (uint bufferID) const
{
if (bufferID >= MAX_BUFFERS)
throw error::Fatal ("hardwired internal limit for test buffers exceeded");
return bufferID < size();
}
};
DiagnosticBufferProvider::DiagnosticBufferProvider()
: pImpl_() //////////TODO create PImpl here
: pImpl_()
{ }
DiagnosticBufferProvider::~DiagnosticBufferProvider() { }
@ -174,13 +61,13 @@ namespace engine {
return diagnostics();
}
DiagnosticBufferProvider::HeapMemProvider&
TrackingHeapBlockProvider&
DiagnosticBufferProvider::reset()
{
pImpl_.reset(new HeapMemProvider());
pImpl_.reset(new TrackingHeapBlockProvider());
return *pImpl_;
}
@ -189,9 +76,9 @@ namespace engine {
{
return &implInstance == pImpl_.get();
}
/* === diagnostic API === */
@ -199,21 +86,21 @@ namespace engine {
bool
DiagnosticBufferProvider::buffer_was_used (uint bufferID) const
{
return pImpl_->access_or_create(bufferID).was_used();
return pImpl_->access_emitted(bufferID).was_used();
}
bool
DiagnosticBufferProvider::buffer_was_closed (uint bufferID) const
{
return pImpl_->access_or_create(bufferID).was_closed();
return pImpl_->access_emitted(bufferID).was_closed();
}
void*
DiagnosticBufferProvider::accessMemory (uint bufferID) const
{
return pImpl_->access_or_create(bufferID).accessMemory();
return pImpl_->access_emitted(bufferID).accessMemory();
}

View file

@ -21,7 +21,7 @@
*/
/** @file diagnostic-buffer-provider.hpp
** An facility for writing unit-tests targetting the BufferProvider interface.
** A facility for writing unit-tests targeting the BufferProvider interface.
**
** @see buffer-provider-protocol-test.cpp
*/
@ -33,6 +33,7 @@
#include "lib/error.hpp"
#include "lib/singleton.hpp"
#include "lib/util.hpp"
#include "proc/engine/type-handler.hpp"
#include "proc/engine/buffer-provider.hpp"
#include <boost/scoped_ptr.hpp>
@ -44,6 +45,13 @@ namespace engine {
namespace error = lumiera::error;
/**
* simple BufferProvider implementation
* with additional allocation tracking
*/
class TrackingHeapBlockProvider;
/********************************************************************
* Helper for unit tests: Buffer provider reference implementation.
*
@ -53,18 +61,11 @@ namespace engine {
: boost::noncopyable
{
/**
* simple BufferProvider implementation
* with additional allocation tracking
*/
class HeapMemProvider;
boost::scoped_ptr<HeapMemProvider> pImpl_;
boost::scoped_ptr<TrackingHeapBlockProvider> pImpl_;
static lib::Singleton<DiagnosticBufferProvider> diagnostics;
HeapMemProvider& reset();
TrackingHeapBlockProvider& reset();
bool isCurrent (BufferProvider const&);
@ -98,22 +99,6 @@ namespace engine {
bool all_buffers_released() const;
template<typename BU>
bool
object_was_attached (uint bufferID) const
{
UNIMPLEMENTED ("verify object attachment status of a specific buffer");
}
template<typename BU>
bool
object_was_destroyed (uint bufferID) const
{
UNIMPLEMENTED ("verify object attachment status of a specific buffer");
}
private:

View file

@ -61,7 +61,7 @@ namespace engine{
CalcStream
EngineService::calculate(ModelPort mPort,
Timings nominalTimings,
OutputConnection output,
OutputConnection& output,
Quality serviceQuality)
{
UNIMPLEMENTED ("build a standard calculation stream");

View file

@ -141,7 +141,7 @@ namespace engine{
CalcStream
calculate(ModelPort mPort,
Timings nominalTimings,
OutputConnection output,
OutputConnection& output,
Quality serviceQuality =QoS_DEFAULT);
CalcStream

View file

@ -55,7 +55,7 @@
#include "proc/state.hpp"
#include "proc/engine/procnode.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/channel-descriptor.hpp"
#include "proc/engine/bufftable-obsolete.hpp"

View file

@ -59,7 +59,7 @@
#include "proc/state.hpp"
#include "proc/engine/procnode.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/channel-descriptor.hpp"
#include "proc/engine/bufftable-obsolete.hpp"
#include "proc/engine/nodeinvocation.hpp"

View file

@ -46,6 +46,7 @@
#include "proc/state.hpp"
#include "proc/asset/proc.hpp"
#include "proc/mobject/parameter.hpp"
#include "proc/engine/channel-descriptor.hpp"
#include "lib/frameid.hpp"
#include "lib/ref-array.hpp"

View file

@ -0,0 +1,325 @@
/*
TrackingHeapBlockProvider - plain heap allocating BufferProvider implementation for tests
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "lib/error.hpp"
#include "include/logging.h"
//#include "lib/meta/function.hpp"
#include "lib/bool-checkable.hpp"
#include "lib/scoped-ptrvect.hpp"
#include "lib/scoped-holder.hpp"
#include "lib/util-foreach.hpp"
#include "proc/engine/tracking-heap-block-provider.hpp"
#include <boost/noncopyable.hpp>
#include <algorithm>
#include <vector>
//using util::for_each;
using util::and_all;
using std::vector;
using lib::ScopedHolder;
using lib::ScopedPtrVect;
namespace engine {
namespace error = lumiera::error;
namespace { // implementation helpers...
using diagn::Block;
/** helper to find Block entries
* based on their raw memory address */
inline bool
identifyBlock (Block const& inQuestion, void* storage)
{
return storage == &inQuestion;
}
/** build a searching predicate */
inline function<bool(Block const&)>
search_for_block_using_this_storage (void* storage)
{
return bind (identifyBlock, _1, storage);
}
template<class VEC>
inline Block*
pick_Block_by_storage (VEC& vec, void* blockLocation)
{
typename VEC::iterator pos
= std::find_if (vec.begin(),vec.end()
,search_for_block_using_this_storage(blockLocation));
if (pos!=vec.end())
return &(*pos);
else
return NULL;
}
}
namespace diagn {
typedef ScopedPtrVect<Block> PoolVec;
typedef ScopedHolder<PoolVec> PoolHolder;
/**
* @internal Pool of allocated buffer Blocks of a specific size.
* Helper for implementing a Diagnostic BufferProvider; it actually just
* performs heap allocations for the Blocks, but keeps a collection of
* allocated Blocks around. Individual entries can be retrieved
* and thus removed from the responsibility of BlockPool.
*
* The idea is that each buffer starts its lifecycle within some pool
* and later gets "emitted" to an output sequence, where it remains for
* later investigation and diagnostics.
*/
class BlockPool
: public lib::BoolCheckable<BlockPool>
{
uint maxAllocCount_;
size_t memBlockSize_;
PoolHolder blockList_;
public:
BlockPool()
: maxAllocCount_(0) // unlimited by default
, memBlockSize_(0)
, blockList_()
{ }
void
initialise (size_t blockSize)
{
blockList_.create();
memBlockSize_ = blockSize;
}
// standard copy operations are valid, but will
// raise a runtime error once BlockPool is initialised.
~BlockPool()
{
if (!verify_all_children_idle())
ERROR (test, "Block actively in use while shutting down BufferProvider "
"allocation pool. This might lead to segfaults and memory leaks.");
}
uint
prepare_for (uint number_of_expected_buffers)
{
if (maxAllocCount_ &&
maxAllocCount_ < blockList_->size() + number_of_expected_buffers)
{
ASSERT (maxAllocCount_ >= blockList_->size());
return maxAllocCount_ - blockList_->size();
}
// currently no hard limit imposed
return number_of_expected_buffers;
}
Block&
createBlock()
{
return blockList_->manage (new Block(memBlockSize_));
}
Block*
find (void* blockLocation)
{
return pick_Block_by_storage (*blockList_, blockLocation);
}
Block*
transferResponsibility (Block* allocatedBlock)
{
return blockList_->detach (allocatedBlock);
}
size_t
size() const
{
return blockList_->size();
}
bool
isValid() const
{
return blockList_;
}
private:
bool
verify_all_children_idle()
{
try {
if (blockList_)
return and_all (*blockList_, is_in_sane_state);
}
ERROR_LOG_AND_IGNORE (test, "State verification of diagnostic BufferProvider allocation pool");
return true;
}
static bool
is_in_sane_state (Block const& block)
{
return !block.was_used()
|| block.was_closed();
}
};
}
namespace { // Details of allocation and accounting
const uint MAX_BUFFERS = 50;
diagn::Block emptyPlaceholder(0);
} // (END) Details of allocation and accounting
/**
* @internal create a memory tracking BufferProvider,
*/
TrackingHeapBlockProvider::TrackingHeapBlockProvider()
: BufferProvider ("Diagnostic_HeapAllocated")
, pool_(new diagn::PoolTable)
, outSeq_()
{ }
TrackingHeapBlockProvider::~TrackingHeapBlockProvider()
{
INFO (proc_mem, "discarding %zu diagnostic buffer entries", outSeq_.size());
}
/* ==== Implementation of the BufferProvider interface ==== */
uint
TrackingHeapBlockProvider::prepareBuffers(uint requestedAmount, HashVal typeID)
{
diagn::BlockPool& responsiblePool = getBlockPoolFor (typeID);
return responsiblePool.prepare_for (requestedAmount);
}
BuffHandle
TrackingHeapBlockProvider::provideLockedBuffer(HashVal typeID)
{
diagn::BlockPool& blocks = getBlockPoolFor (typeID);
diagn::Block& newBlock = blocks.createBlock();
return buildHandle (typeID, newBlock.accessMemory(), &newBlock);
}
void
TrackingHeapBlockProvider::mark_emitted (HashVal typeID, LocalKey const& implID)
{
diagn::Block* block4buffer = locateBlock (typeID, implID);
if (!block4buffer)
throw error::Logic ("Attempt to emit a buffer not known to this BufferProvider"
, LUMIERA_ERROR_BUFFER_MANAGEMENT);
diagn::BlockPool& pool = getBlockPoolFor (typeID);
outSeq_.manage (pool.transferResponsibility (block4buffer));
}
/** mark a buffer as officially discarded */
void
TrackingHeapBlockProvider::detachBuffer (HashVal typeID, LocalKey const& implID)
{
diagn::Block* block4buffer = locateBlock (typeID, implID);
REQUIRE (block4buffer, "releasing a buffer not allocated through this provider");
block4buffer->markReleased();
}
/* ==== Implementation details ==== */
size_t
TrackingHeapBlockProvider::emittedCnt() const
{
return outSeq_.size();
}
diagn::Block&
TrackingHeapBlockProvider::access_emitted (uint bufferID)
{
if (!withinOutputSequence (bufferID))
return emptyPlaceholder;
else
return outSeq_[bufferID];
}
bool
TrackingHeapBlockProvider::withinOutputSequence (uint bufferID) const
{
if (bufferID >= MAX_BUFFERS)
throw error::Fatal ("hardwired internal limit for test buffers exceeded");
return bufferID < outSeq_.size();
}
diagn::BlockPool&
TrackingHeapBlockProvider::getBlockPoolFor (HashVal typeID)
{
diagn::BlockPool& pool = (*pool_)[typeID];
if (!pool)
pool.initialise(getBufferSize(typeID));
return pool;
}
diagn::Block*
TrackingHeapBlockProvider::locateBlock (HashVal typeID, void* storage)
{
diagn::BlockPool& pool = getBlockPoolFor (typeID);
diagn::Block* block4buffer = pool.find (storage);
return block4buffer? block4buffer
: searchInOutSeqeuence (storage);
}
diagn::Block*
TrackingHeapBlockProvider::searchInOutSeqeuence (void* blockLocation)
{
return pick_Block_by_storage (outSeq_, blockLocation);
}
} // namespace engine

View file

@ -0,0 +1,187 @@
/*
TRACKING-HEAP-BLOCK-PROVIDER.hpp - plain heap allocating BufferProvider implementation for tests
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file tracking-heap-block-provider.hpp
** Dummy implementation of the BufferProvider interface to support writing unit tests.
** This BufferProvider is deliberately simplistic: it just claims
** more and more heap blocks and never releases any memory dynamically. This makes it
** possible to inspect the additional tracking status flags of each allocated block after
** the fact.
**
** The allocated buffers are numbered with a simple ascending sequence of integers,
** used as LocalKey (see BufferMetadata). Clients can just request a Buffer with the
** given number, causing that block to be allocated. There is a "backdoor" allowing
** access to any allocated block, even if it is considered "released" by the terms
** of the usual lifecycle. Only when the provider object itself gets destroyed,
** all allocated blocks will be discarded.
**
** @see DiagnosticOutputSlot
** @see DiagnosticBufferProvider
** @see buffer-provider-protocol-test.cpp
*/
#ifndef PROC_ENGINE_TRACKING_HEAP_BLOCK_PROVIDER_H
#define PROC_ENGINE_TRACKING_HEAP_BLOCK_PROVIDER_H
#include "lib/error.hpp"
#include "proc/engine/buffer-provider.hpp"
#include "lib/scoped-ptrvect.hpp"
#include <tr1/unordered_map>
#include <boost/scoped_ptr.hpp>
#include <boost/scoped_array.hpp>
namespace engine {
namespace error = lumiera::error;
using lib::ScopedPtrVect;
namespace diagn {
using boost::scoped_ptr;
using boost::scoped_array;
/**
* Helper for a diagnostic BufferProvider:
* A block of heap allocated storage, with the capability
* to store some additional tracking information.
*/
class Block
: boost::noncopyable
{
scoped_array<char> storage_;
bool was_released_;
public:
explicit
Block(size_t bufferSize)
: storage_(bufferSize? new char[bufferSize] : NULL)
, was_released_(false)
{ }
bool
was_used() const
{
return bool(storage_);
}
bool
was_closed() const
{
return was_released_;
}
void*
accessMemory() const
{
REQUIRE (storage_, "Block was never prepared for use");
return storage_.get();
}
void
markReleased()
{
was_released_ = true;
}
};
class BlockPool;
typedef std::tr1::unordered_map<HashVal,BlockPool> PoolTable;
}
/**
* simple BufferProvider implementation with additional allocation tracking.
* @internal used as PImpl by DiagnosticBufferProvider and DiagnosticOutputSlot.
*
* This dummy implementation of the BufferProvider interface uses a linearly growing
* table of heap allocated buffer blocks, which will never be discarded, unless the object
* is discarded as a whole. There is an additional testing/diagnostics API to access the
* tracked usage information, even when blocks are already marked as "released".
*/
class TrackingHeapBlockProvider
: public BufferProvider
{
scoped_ptr<diagn::PoolTable> pool_;
ScopedPtrVect<diagn::Block> outSeq_;
public:
/* === BufferProvider interface === */
virtual uint prepareBuffers (uint count, HashVal typeID);
virtual BuffHandle provideLockedBuffer (HashVal typeID);
virtual void mark_emitted (HashVal entryID, LocalKey const&);
virtual void detachBuffer (HashVal entryID, LocalKey const&);
public:
TrackingHeapBlockProvider();
virtual ~TrackingHeapBlockProvider();
size_t emittedCnt() const;
diagn::Block& access_emitted (uint bufferID);
template<typename TY>
TY& accessAs (uint bufferID);
private:
bool withinOutputSequence (uint bufferID) const;
diagn::BlockPool& getBlockPoolFor (HashVal typeID);
diagn::Block* locateBlock (HashVal typeID, void*);
diagn::Block* searchInOutSeqeuence (void* storage);
};
/** convenience shortcut: access the buffer with the given number,
* then try to convert the raw memory to the templated type.
* @throw error::Invalid if the required frame number is beyond
* the number of buffers marked as "emitted"
* @throw error::Fatal if conversion is not possible or the
* conversion path chosen doesn't work (which might
* be due to RTTI indicating an incompatible type).
*/
template<typename TY>
TY&
TrackingHeapBlockProvider::accessAs (uint bufferID)
{
if (!withinOutputSequence (bufferID))
throw error::Invalid ("Buffer with the given ID not yet emitted");
diagn::Block& memoryBlock = access_emitted (bufferID);
TY* converted = reinterpret_cast<TY*> (memoryBlock.accessMemory());
REQUIRE (converted);
return *converted;
}
} // namespace engine
#endif

View file

@ -0,0 +1,182 @@
/*
TYPE-HANDLER.hpp - a functor pair for setup and destruction
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file type-handler.hpp
** Helper holding a pair of type-build-up and destruction functors.
** Basically these two functors embody all type-specific knowledge required
** to place an object into some buffer space and to clean up later. They may even
** be used in a more generic way, e.g. just to "prepare" a buffer or frame and to
** "clean up" after usage.
**
** Within the Lumiera Engine, the BufferProvider default implementation utilises instances
** of TypeHandler to \em describe specific buffer types capable of managing an attached object,
** or requiring some other kind of special treatment of the memory area used for the buffer.
** This BufferDescriptor is embodied into the BufferMetadata::Key and used later on to invoke
** the contained ctor / dtor functors, passing a concrete buffer (memory area).
**
** @see buffer-metadata.hpp
** @see buffer-provider.hpp
** @see BufferMetadataKey_test#verifyTypeHandler unit-test
*/
#ifndef PROC_ENGINE_TYPE_HANDLER_H
#define PROC_ENGINE_TYPE_HANDLER_H
#include "lib/error.hpp"
#include "lib/functor-util.hpp"
#include <tr1/functional>
#include <boost/functional/hash.hpp>
namespace engine {
using lib::HashVal;
using std::tr1::bind;
using std::tr1::function;
using std::tr1::placeholders::_1;
namespace error = lumiera::error;
namespace { // (optional) helpers to build an object embedded into a buffer...
template<class X>
inline void
buildIntoBuffer (void* storageBuffer)
{
new(storageBuffer) X();
}
template<class X, typename A1>
inline void
buildIntoBuffer_A1 (void* storageBuffer, A1 arg1)
{
new(storageBuffer) X(arg1);
}
template<class X>
inline void
destroyInBuffer (void* storageBuffer)
{
X* embedded = static_cast<X*> (storageBuffer);
embedded->~X();
}
}//(End)placement-new helpers
/**
* A pair of functors to maintain a datastructure within a buffer.
* TypeHandler describes how to outfit the buffer in a specific way.
* Special convenience builder function(s) are provided to create a
* TypeHandler performing placement-new into a buffer given on invocation.
* @note engine::BufferMetadata uses a TypeHandler to represent any
* special treatment of a buffer space. When defined, the buffer
* will be prepared on locking and cleanup will be invoked
* automatically when releasing.
* @warning comparison and hash values rely on internals of the
* tr1::function implementation and might not be 100% accurate
*/
struct TypeHandler
{
typedef function<void(void*)> DoInBuffer;
DoInBuffer createAttached;
DoInBuffer destroyAttached;
/** build an invalid NIL TypeHandler */
TypeHandler()
: createAttached()
, destroyAttached()
{ }
/** build a TypeHandler
* binding to arbitrary constructor and destructor functions.
* On invocation, these functions get a void* to the buffer.
* @note the functor objects created from these operations
* might be shared for handling multiple buffers.
* Be careful with any state or arguments.
*/
template<typename CTOR, typename DTOR>
TypeHandler(CTOR ctor, DTOR dtor)
: createAttached (ctor)
, destroyAttached (dtor)
{ }
/** builder function defining a TypeHandler
* to place a default-constructed object
* into the buffer. */
template<class X>
static TypeHandler
create ()
{
return TypeHandler (buildIntoBuffer<X>, destroyInBuffer<X>);
}
template<class X, typename A1>
static TypeHandler
create (A1 a1)
{
return TypeHandler ( bind (buildIntoBuffer_A1<X,A1>, _1, a1)
, destroyInBuffer<X>);
}
bool
isValid() const
{
return bool(createAttached)
&& bool(destroyAttached);
}
friend HashVal
hash_value (TypeHandler const& handler)
{
HashVal hash(0);
if (handler.isValid())
{
boost::hash_combine(hash, handler.createAttached);
boost::hash_combine(hash, handler.destroyAttached);
}
return hash;
}
friend bool
operator== (TypeHandler const& left, TypeHandler const& right)
{
return (!left.isValid() && !right.isValid())
|| ( util::rawComparison(left.createAttached, right.createAttached)
&& util::rawComparison(left.destroyAttached, right.destroyAttached)
);
}
friend bool
operator!= (TypeHandler const& left, TypeHandler const& right)
{
return !(left == right);
}
};
} // namespace engine
#endif

View file

@ -0,0 +1,212 @@
/*
OUTPUT-SLOT-CONNECTION.hpp - implementation API for concrete output slots
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
/** @file output-slot-connection.hpp
** Interface for concrete output implementations to talk to the OutputSlot frontend.
** The OutputSlot concept helps to decouple the render engine implementation from the details
** of handling external output connections. For this to work, a concrete implementation of such
** an external output needs to integrate with the generic OutputSlot frontend, as used by the
** engine. This generic frontend uses a PImpl, pointing to a ConnectionState object, which embodies
** the actual implementation. Moreover, this actual implementation is free to use specifically crafted
** OutputSlot::Connection elements to handle the ongoing output for individual channels. The latter
** thus becomes the central implementation-side API for providing actual output capabilities.
**
** @see OutputSlotProtocol_test
** @see diagnostic-output-slot.hpp ////TODO
*/
#ifndef PROC_PLAY_OUTPUT_SLOT_CONNECTION_H
#define PROC_PLAY_OUTPUT_SLOT_CONNECTION_H
#include "lib/error.hpp"
#include "proc/play/output-slot.hpp"
#include "lib/handle.hpp"
//#include "lib/time/timevalue.hpp"
//#include "proc/engine/buffer-provider.hpp"
//#include "proc/play/timings.hpp"
#include "lib/iter-source.hpp"
#include "lib/iter-adapter-stl.hpp"
//#include "lib/sync.hpp"
#include <boost/noncopyable.hpp>
#include <boost/scoped_ptr.hpp>
//#include <string>
//#include <vector>
//#include <tr1/memory>
namespace proc {
namespace play {
using ::engine::BuffHandle;
//using ::engine::BufferProvider;
//using lib::time::Time;
//using std::string;
using lib::transform;
using lib::iter_stl::eachElm;
//using std::vector;
//using std::tr1::shared_ptr;
using boost::scoped_ptr;
/** @internal represents the \em active
* point in each of the per-channel connections
* used when this OutputSlot is operational.
*
* \par OutputSlot Core API
*
* Actually, this extension point towards the implementation
* of the actual output handling carries the core API of OutputSlot.
* Thus, the task of actually implementing an OutputSlot boils down
* to implementing this interface and providing a ConnectionState.
* - \c lock() announces this FrameID and the corresponding buffer
* to be in exclusive use by the client from now on
* - \c transfer() ends the client sided processing and initiates
* the outputting of the data found in the corresponding buffer.
* - \c pushout() actually pushes the denoted buffer to the output.
* Typically, \c pushout() is called from the \c transfer()
* implementation; yet it may as well be called from a separate
* service thread or some kind of callback.
* @note the meaning of FrameID is implementation defined.
*/
class OutputSlot::Connection
{
public:
virtual ~Connection();
virtual BuffHandle claimBufferFor(FrameID) =0;
virtual bool isTimely (FrameID, TimeValue) =0;
virtual void transfer (BuffHandle const&) =0;
virtual void pushout (BuffHandle const&) =0;
virtual void discard (BuffHandle const&) =0;
virtual void shutDown () =0;
};
/**
* Extension point for Implementation.
* The ConnectionState is where the concrete output
* handling implementation is expected to reside.
* OutputSlot is a frontend and accesses
* ConnectionState in the way of a PImpl.
*/
class OutputSlot::ConnectionState
: public OutputSlot::Allocation
, boost::noncopyable
{
public:
virtual ~ConnectionState() { }
};
/**
* Base class for the typical implementation approach.
* Using this class is \em not mandatory. But typically
* such an implementation has to manage a collection of Connection objects
* representing the "active points" in several media channels
* connected through this OutputSlot. These Connection subclasses
* are what is referenced by the DataSink smart-ptrs handed out
* to the client code. As ConnectionState implements the Allocation
* API, it has the responsibility to create these DataSink smart-ptrs,
* which means to wire them appropriately and also provide a
* deleter function (here #shutdownConnection) to be invoked
* when the last copy of the smart-handle goes out of scope.
*
* The typical standard/base implementation provided here
* manages a collection of active Connection subclass objects.
*/
template<class CON>
class ConnectionStateManager
: public OutputSlot::ConnectionState
, public vector<CON>
{
typedef OutputSlot::OpenedSinks OpenedSinks;
/* == Allocation Interface == */
OpenedSinks
getOpenedSinks()
{
REQUIRE (this->isActive());
return transform (eachElm(*this), connectOutputSink);
}
bool
isActive()
{
return 0 < vector<CON>::size();
}
public:
ConnectionStateManager()
{ }
virtual
~ConnectionStateManager()
{ }
void
init (uint numChannels)
{
for (uint i=0; i<numChannels; ++i)
push_back(buildConnection());
}
/** factory function to build the actual
* connection handling objects per channel */
virtual CON buildConnection() =0;
private: // Implementation details
static DataSink
connectOutputSink (CON& connection)
{
DataSink newSink;
newSink.activate(&connection, shutdownConnection);
return newSink;
}
static void
shutdownConnection (OutputSlot::Connection* toClose)
{
REQUIRE (toClose);
toClose->shutDown();
}
};
}} // namespace proc::play
#endif

View file

@ -21,13 +21,21 @@
* *****************************************************/
#include "lib/error.hpp"
#include "proc/play/output-slot.hpp"
#include "proc/play/output-slot-connection.hpp"
#include <boost/noncopyable.hpp>
#include <vector>
namespace proc {
namespace play {
using std::vector;
namespace error = lumiera::error;
namespace { // hidden local details of the service implementation....
@ -36,8 +44,17 @@ namespace play {
OutputSlot::~OutputSlot() { } // emit VTables here....
OutputSlot::Allocation::~Allocation() { }
OutputSlot::Connection::~Connection() { }
/** whether this output slot is occupied
@ -47,9 +64,53 @@ namespace play {
bool
OutputSlot::isFree() const
{
UNIMPLEMENTED ("connection state");
return ! this->state_;
}
/** */
OutputSlot::Allocation&
OutputSlot::allocate()
{
if (!isFree())
throw error::Logic ("Attempt to open/allocate an OutputSlot already in use.");
UNIMPLEMENTED ("internal interface to determine the number of channel-connections");
state_.reset (this->buildState());
return *state_;
}
void
OutputSlot::disconnect()
{
if (!isFree())
state_.reset(0);
}
/* === DataSink frontend === */
BuffHandle
DataSink::lockBufferFor(FrameID frameNr)
{
return impl().claimBufferFor(frameNr);
}
void
DataSink::emit (FrameID frameNr, BuffHandle const& data2emit, TimeValue currentTime)
{
OutputSlot::Connection& connection = impl();
if (connection.isTimely(frameNr,currentTime))
connection.transfer(data2emit);
else
connection.discard(data2emit);
}

View file

@ -45,10 +45,10 @@
//#include "lib/sync.hpp"
#include <boost/noncopyable.hpp>
#include <boost/scoped_ptr.hpp>
//#include <string>
//#include <vector>
//#include <tr1/memory>
//#include <boost/scoped_ptr.hpp>
namespace proc {
@ -56,29 +56,18 @@ namespace play {
using ::engine::BuffHandle;
using ::engine::BufferProvider;
using lib::time::Time;
using lib::time::TimeValue;
//using std::string;
//using std::vector;
//using std::tr1::shared_ptr;
//using boost::scoped_ptr;
using boost::scoped_ptr;
/** established output channel */
class Connection;
typedef int64_t FrameNr;
class DataSink
: public lib::Handle<Connection>
{
public:
BuffHandle lockBufferFor(FrameNr);
void emit(FrameNr);
};
class DataSink;
typedef int64_t FrameID;
@ -90,32 +79,46 @@ namespace play {
class OutputSlot
: boost::noncopyable
{
protected:
/** Table to maintain connection state */
class ConnectionState;
scoped_ptr<ConnectionState> state_;
virtual ConnectionState* buildState() =0;
public:
virtual ~OutputSlot();
typedef lib::IterSource<DataSink>::iterator OpenedSinks;
struct Allocation
class Allocation
{
OpenedSinks getOpenedSinks();
bool isActive();
public:
virtual OpenedSinks getOpenedSinks() =0;
virtual bool isActive() =0;
/////TODO add here the getters for timing constraints
protected:
~Allocation();
};
/** established output channel */
class Connection;
/** can this OutputSlot be allocated? */
bool isFree() const;
Allocation
allocate();
/** claim this slot for exclusive use */
Allocation& allocate();
protected:
friend class DataSink;
virtual void lock (FrameNr, uint channel) =0;
virtual void transfer (FrameNr, uint channel) =0;
virtual void pushout (FrameNr, uint channel) =0;
/** disconnect from this OutputSlot
* @warning may block until DataSinks are gone */
void disconnect();
private:
@ -123,5 +126,18 @@ namespace play {
class DataSink
: public lib::Handle<OutputSlot::Connection>
{
public:
BuffHandle lockBufferFor(FrameID);
void emit(FrameID, BuffHandle const&, TimeValue currentTime = Time::MAX); ///////////////TICKET #855
};
}} // namespace proc::play
#endif

View file

@ -20,53 +20,31 @@
// 1/11 - integer floor and wrap operation(s)
// 1/11 - how to fetch the path of the own executable -- at least under Linux?
// 10/11 - simple demo using a pointer and a struct
// 11/11 - using the boost random number generator(s)
#include <iostream>
#include <boost/lexical_cast.hpp>
#include <boost/random/linear_congruential.hpp>
using boost::lexical_cast;
using std::cout;
/**
* custom datastructure
* holding a constant char array with "hey"
*/
struct MyStruct
{
char data_[3];
const int length_;
MyStruct()
: length_(3)
{
const char *tmp = "hey";
for (int i=0; i<length_; ++i)
data_[i] = *(tmp+i);
}
};
// define a global variable holding a MyStruct
MyStruct theStruct;
void
printMyStruct(MyStruct* myPointer)
{
for (int i=0; i < myPointer->length_; ++i)
cout << myPointer->data_[i];
cout << "\n";
}
using std::endl;
int
main (int, char**) //(int argc, char* argv[])
main (int cnt, char* argv[])
{
printMyStruct (&theStruct);
int32_t seed = (2 == cnt)? lexical_cast<int32_t> (argv[1]) : 42;
boost::rand48 ranGen(seed);
cout << "seed = "<< seed << endl;
for (uint i=0; i< 100; ++i)
cout << ranGen() % CHAR_MAX <<"__";
cout << "\n.gulp.\n";

View file

@ -2,16 +2,27 @@ TESTING "Component Test Suite: Render Engine parts" ./test-components --group=en
PLANNED "Buffer provider diagnostics" BufferProviderProtocol_test <<END
TEST "Test support: dummy frames" TestFrame_test <<END
return: 0
END
PLANNED "buffer metadata type keys" BufferMetadataKey_test <<END
TEST "Test support: dummy buffer provider" TrackingHeapBlockProvider_test <<END
return: 0
END
PLANNED "buffer metadata and state transitions" BufferMetadata_test <<END
TEST "Buffer provider diagnostics" BufferProviderProtocol_test <<END
return: 0
END
TEST "buffer metadata type keys" BufferMetadataKey_test <<END
return: 0
END
TEST "buffer metadata and state transitions" BufferMetadata_test <<END
END

View file

@ -167,6 +167,13 @@ namespace test {
// serialise, then de-serialise into a new instance and compare both
}
int
twoRandomDigits()
{
return 10 + rand() % 90;
}
} // test-helper implementation
@ -244,7 +251,7 @@ namespace test {
arg3->storeTuple (tuple::make (rand() % 10, TimeVar(randTime())));
arg4->storeTuple (tuple::make (rand() % 10, TimeVar(randTime())));
arg5->storeTuple (tuple::make (TTime (randTime()), Tstr("glorious"), 10 + rand() % 90));
arg5->storeTuple (tuple::make (TTime (randTime()), Tstr("glorious"), twoRandomDigits() ));
CHECK (!arg5->canUndo());
@ -333,7 +340,7 @@ namespace test {
// store a set of parameter values, later to be used on invocation
args.storeTuple (
tuple::make (TTime(randTime()), Tstr("Lumiera rocks"), rand() % 100));
tuple::make (TTime(randTime()), Tstr("Lumiera rocks"), twoRandomDigits() ));
CHECK (!isnil (args));
cout << args << endl;
@ -377,7 +384,7 @@ namespace test {
protocol << "RESET...";
args.storeTuple (
tuple::make (TTime(TimeValue(123456)), Tstr("unbelievable"), rand() %100));
tuple::make (TTime(TimeValue(123456)), Tstr("unbelievable"), twoRandomDigits() ));
cout << "modified: " << args << endl;
cout << "copied : " << argsCopy << endl; // holds still the old params & memento

View file

@ -24,37 +24,25 @@
#include "lib/error.hpp"
#include "lib/test/run.hpp"
#include "lib/test/test-helper.hpp"
//#include "lib/util-foreach.hpp"
#include "lib/util.hpp"
//#include "proc/play/diagnostic-output-slot.hpp"
//#include "proc/engine/testframe.hpp"
//#include "proc/engine/diagnostic-buffer-provider.hpp"
#include "proc/engine/buffer-metadata.hpp"
#include "proc/engine/testframe.hpp"
//#include "proc/engine/buffhandle.hpp"
//#include "proc/engine/bufftable.hpp"
//#include <boost/format.hpp>
#include <boost/scoped_ptr.hpp>
#include <cstdlib>
//#include <iostream>
#include <cstring>
//using boost::format;
//using std::string;
//using std::cout;
//using util::for_each;
using std::strncpy;
using boost::scoped_ptr;
using util::isnil;
using lib::test::randStr;
using util::isSameObject;
using util::isnil;
namespace engine{
namespace test {
// using lib::AllocationCluster;
// using mobject::session::PEffect;
// using ::engine::BuffHandle;
using lumiera::error::LUMIERA_ERROR_FATAL;
using lumiera::error::LUMIERA_ERROR_INVALID;
using lumiera::error::LUMIERA_ERROR_LIFECYCLE;
@ -66,13 +54,23 @@ namespace test {
const size_t SIZE_A = 1 + rand() % TEST_MAX_SIZE;
const size_t SIZE_B = 1 + rand() % TEST_MAX_SIZE;
const HashVal JUST_SOMETHING = 123;
const void* const SOME_POINTER = &JUST_SOMETHING;
// const uint TEST_SIZE = 1024*1024;
// const uint TEST_ELMS = 20;
HashVal JUST_SOMETHING = 123;
void* const SOME_POINTER = &JUST_SOMETHING;
}
template<typename TY>
TY&
accessAs (metadata::Entry& entry)
{
TY* ptr = reinterpret_cast<TY*> (entry.access());
ASSERT (ptr);
return *ptr;
}
}//(End) Test fixture and helpers
/*******************************************************************
@ -83,7 +81,7 @@ namespace test {
class BufferMetadata_test : public Test
{
/** common Metadata table to be tested */
scoped_ptr<Metadata> meta_;
scoped_ptr<BufferMetadata> meta_;
virtual void
run (Arg)
@ -91,7 +89,7 @@ namespace test {
CHECK (ensure_proper_fixture());
verifyBasicProperties();
verifyStandardCase();
UNIMPLEMENTED ("cover all metadata properties");
verifyStateMachine();
}
@ -99,7 +97,7 @@ namespace test {
ensure_proper_fixture()
{
if (!meta_)
meta_.reset(new Metadata("BufferMetadata_test"));
meta_.reset(new BufferMetadata("BufferMetadata_test"));
return (SIZE_A != SIZE_B)
&& (JUST_SOMETHING != meta_->key(SIZE_A))
@ -112,11 +110,11 @@ namespace test {
verifyBasicProperties()
{
// retrieve some type keys
Metadata::Key key = meta_->key(SIZE_A);
metadata::Key key = meta_->key(SIZE_A);
CHECK (key);
Metadata::Key key1 = meta_->key(SIZE_A);
Metadata::Key key2 = meta_->key(SIZE_B);
metadata::Key key1 = meta_->key(SIZE_A);
metadata::Key key2 = meta_->key(SIZE_B);
CHECK (key1);
CHECK (key2);
CHECK (key == key1);
@ -133,16 +131,16 @@ namespace test {
CHECK ( isSameObject (meta_->get(key), meta_->get(key1)));
CHECK (!isSameObject (meta_->get(key), meta_->get(key2)));
// entries retrieved this far are inactive (type only) entries
Metadata::Entry& m1 = meta_->get(key);
// entries retrieved thus far were inactive (type only) entries
metadata::Entry& m1 = meta_->get(key);
CHECK (NIL == m1.state());
CHECK (!meta_->isLocked(key));
VERIFY_ERROR (LIFECYCLE, m1.mark(EMITTED) );
VERIFY_ERROR (LIFECYCLE, m1.mark(LOCKED) );
VERIFY_ERROR (LIFECYCLE, m1.mark(EMITTED));
VERIFY_ERROR (LIFECYCLE, m1.mark(FREE) );
// now create an active (buffer) entry
Metadata::Entry& m2 = meta_->markLocked (key, SOME_POINTER);
metadata::Entry& m2 = meta_->markLocked (key, SOME_POINTER);
CHECK (!isSameObject (m1,m2));
CHECK (NIL == m1.state());
CHECK (LOCKED == m2.state());
@ -173,7 +171,7 @@ namespace test {
CHECK ( meta_->isKnown(keyX));
CHECK ( meta_->isKnown(key1));
VERIFY_ERROR (LIFECYCLE, m2.access());
VERIFY_ERROR (LIFECYCLE, m2.mark(LOCKED));
VERIFY_ERROR (FATAL, m2.mark(LOCKED)); // buffer missing
CHECK ( isSameObject (m2, meta_->get(keyX))); // still accessible
// release buffer...
@ -189,18 +187,18 @@ namespace test {
* @note to get the big picture, please refer to
* BufferProviderProtocol_test#verifyStandardCase()
* This testcase here performs precisely the metadata related
* operations necessary to carry out the standard case outlined
* in that more high level test.
* operations necessary to carry out the standard case
* outlined on a higher level in the mentioned test.
*/
void
verifyStandardCase()
{
// to build a descriptor for a buffer holding a TestFrame
TypeHandler attachTestFrame = TypeHandler::create<TestFrame>();
Metadata::Key bufferType1 = meta_->key(sizeof(TestFrame), attachTestFrame);
metadata::Key bufferType1 = meta_->key(sizeof(TestFrame), attachTestFrame);
// to build a descriptor for a raw buffer of size SIZE_B
Metadata::Key rawBuffType = meta_->key(SIZE_B);
metadata::Key rawBuffType = meta_->key(SIZE_B);
// to announce using a number of buffers of this type
LocalKey transaction1(1);
@ -219,23 +217,25 @@ namespace test {
// a real-world BufferProvider would use some kind of allocator
// track individual buffers by metadata entries
Metadata::Entry f0 = meta_->markLocked(bufferType1, &frames[0]);
Metadata::Entry f1 = meta_->markLocked(bufferType1, &frames[1]);
Metadata::Entry f2 = meta_->markLocked(bufferType1, &frames[2]);
metadata::Entry& f0 = meta_->markLocked(bufferType1, &frames[0]);
metadata::Entry& f1 = meta_->markLocked(bufferType1, &frames[1]);
metadata::Entry& f2 = meta_->markLocked(bufferType1, &frames[2]);
metadata::Entry& r0 = meta_->markLocked(rawBuffType, &rawbuf[0]);
metadata::Entry& r1 = meta_->markLocked(rawBuffType, &rawbuf[1]);
Metadata::Entry r0 = meta_->markLocked(bufferType1, &rawbuf[0]);
Metadata::Entry r1 = meta_->markLocked(bufferType1, &rawbuf[1]);
CHECK (LOCKED == f0.state());
CHECK (LOCKED == f1.state());
CHECK (LOCKED == f2.state());
CHECK (LOCKED == r0.state());
CHECK (LOCKED == r1.state());
// for the TestFrame buffers, additionally we'd have to create/attach an object
attachTestFrame.createAttached (frames+0); ////////////////////////////////////////TODO: shouldn't this happen automatically??
attachTestFrame.createAttached (frames+1);
attachTestFrame.createAttached (frames+2);
CHECK (transaction1 == f0.localKey());
CHECK (transaction1 == f1.localKey());
CHECK (transaction1 == f2.localKey());
CHECK (transaction2 == r0.localKey());
CHECK (transaction2 == r1.localKey());
CHECK (f0.access() == frames+0);
CHECK (f1.access() == frames+1);
@ -243,6 +243,11 @@ namespace test {
CHECK (r0.access() == rawbuf+0);
CHECK (r1.access() == rawbuf+1);
TestFrame defaultFrame;
CHECK (defaultFrame == f0.access());
CHECK (defaultFrame == f1.access());
CHECK (defaultFrame == f2.access());
// at that point, we'd return BuffHandles to the client
HashVal handle_f0(f0);
HashVal handle_f1(f1);
@ -250,14 +255,35 @@ namespace test {
HashVal handle_r0(r0);
HashVal handle_r1(r1);
// client uses the buffers
// client uses the buffers---------------------(Start)
accessAs<TestFrame> (f0) = testData(1);
accessAs<TestFrame> (f1) = testData(2);
accessAs<TestFrame> (f2) = testData(3);
//////////////////TODO: access the storage through the metadata-key
//////////////////TODO: to a state transition on the metadata
CHECK (testData(1) == frames[0]);
CHECK (testData(2) == frames[1]);
CHECK (testData(3) == frames[2]);
CHECK (TestFrame::isAlive (f0.access()));
CHECK (TestFrame::isAlive (f1.access()));
CHECK (TestFrame::isAlive (f2.access()));
strncpy (& accessAs<char> (r0), randStr(SIZE_B - 1).c_str(), SIZE_B);
strncpy (& accessAs<char> (r1), randStr(SIZE_B - 1).c_str(), SIZE_B);
// client might trigger some state transitions
f0.mark(EMITTED);
f1.mark(EMITTED);
f1.mark(BLOCKED);
// client uses the buffers---------------------(End)
f0.mark(FREE); // note: implicitly invoking the embedded dtor
f1.mark(FREE);
f2.mark(FREE);
r0.mark(FREE);
r1.mark(FREE);
attachTestFrame.destroyAttached (frames+0); ////////////////////////////////////////TODO: shouldn't this happen automatically??
attachTestFrame.destroyAttached (frames+1);
attachTestFrame.destroyAttached (frames+2);
meta_->release(handle_f0);
meta_->release(handle_f1);
@ -265,14 +291,106 @@ namespace test {
meta_->release(handle_r0);
meta_->release(handle_r1);
CHECK (TestFrame::isDead (&frames[0])); // was destroyed implicitly
CHECK (TestFrame::isDead (&frames[1]));
CHECK (TestFrame::isDead (&frames[2]));
// manual cleanup of test allocations
delete[] frames;
delete[] rawbuf;
CHECK (!meta_->isLocked(handle_f0));
CHECK (!meta_->isLocked(handle_f1));
CHECK (!meta_->isLocked(handle_f2));
CHECK (!meta_->isLocked(handle_r0));
CHECK (!meta_->isLocked(handle_r1));
}
void
verifyStateMachine()
{
// start with building a type key....
metadata::Key key = meta_->key(SIZE_A);
CHECK (NIL == meta_->get(key).state());
CHECK (meta_->get(key).isTypeKey());
CHECK (!meta_->isLocked(key));
#if false /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #834
#endif /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #834
VERIFY_ERROR (LIFECYCLE, meta_->get(key).mark(LOCKED) );
VERIFY_ERROR (LIFECYCLE, meta_->get(key).mark(EMITTED));
VERIFY_ERROR (LIFECYCLE, meta_->get(key).mark(BLOCKED));
VERIFY_ERROR (LIFECYCLE, meta_->get(key).mark(FREE) );
VERIFY_ERROR (LIFECYCLE, meta_->get(key).mark(NIL) );
// now build a concrete buffer entry
metadata::Entry& entry = meta_->markLocked(key, SOME_POINTER);
CHECK (LOCKED == entry.state());
CHECK (!entry.isTypeKey());
CHECK (SOME_POINTER == entry.access());
VERIFY_ERROR (FATAL, entry.mark(LOCKED) ); // invalid state transition
VERIFY_ERROR (FATAL, entry.mark(NIL) );
entry.mark (EMITTED); // valid transition
CHECK (EMITTED == entry.state());
CHECK (entry.isLocked());
VERIFY_ERROR (FATAL, entry.mark(LOCKED) );
VERIFY_ERROR (FATAL, entry.mark(EMITTED));
VERIFY_ERROR (FATAL, entry.mark(NIL) );
CHECK (EMITTED == entry.state());
entry.mark (FREE);
CHECK (FREE == entry.state());
CHECK (!entry.isLocked());
CHECK (!entry.isTypeKey());
VERIFY_ERROR (LIFECYCLE, entry.access() );
VERIFY_ERROR (FATAL, entry.mark(LOCKED) );
VERIFY_ERROR (FATAL, entry.mark(EMITTED));
VERIFY_ERROR (FATAL, entry.mark(BLOCKED));
VERIFY_ERROR (FATAL, entry.mark(FREE) );
VERIFY_ERROR (FATAL, entry.mark(NIL) );
// re-use buffer slot, start new lifecycle
void* OTHER_LOCATION = this;
entry.lock (OTHER_LOCATION);
CHECK (LOCKED == entry.state());
CHECK (entry.isLocked());
VERIFY_ERROR (LIFECYCLE, entry.lock(SOME_POINTER));
entry.mark (BLOCKED); // go directly to the blocked state
CHECK (BLOCKED == entry.state());
VERIFY_ERROR (FATAL, entry.mark(LOCKED) );
VERIFY_ERROR (FATAL, entry.mark(EMITTED) );
VERIFY_ERROR (FATAL, entry.mark(BLOCKED) );
VERIFY_ERROR (FATAL, entry.mark(NIL) );
CHECK (OTHER_LOCATION == entry.access());
entry.mark (FREE);
CHECK (!entry.isLocked());
VERIFY_ERROR (LIFECYCLE, entry.access() );
meta_->lock(key, SOME_POINTER);
CHECK (entry.isLocked());
entry.mark (EMITTED);
entry.mark (BLOCKED);
CHECK (BLOCKED == entry.state());
CHECK (SOME_POINTER == entry.access());
// can't discard metadata, need to free first
VERIFY_ERROR (LIFECYCLE, meta_->release(entry) );
CHECK (meta_->isKnown(entry));
CHECK (entry.isLocked());
entry.mark (FREE);
meta_->release(entry);
CHECK (!meta_->isKnown(entry));
CHECK ( meta_->isKnown(key));
}
};

View file

@ -24,29 +24,25 @@
#include "lib/error.hpp"
#include "lib/test/run.hpp"
#include "lib/test/test-helper.hpp"
#include "lib/test/testdummy.hpp"
#include "lib/util-foreach.hpp"
//#include "proc/play/diagnostic-output-slot.hpp"
#include "proc/engine/testframe.hpp"
#include "proc/engine/diagnostic-buffer-provider.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/buffhandle-attach.hpp"
#include "proc/engine/bufftable.hpp"
//#include <boost/format.hpp>
//#include <iostream>
//using boost::format;
//using std::string;
//using std::cout;
using util::isSameObject;
using util::for_each;
namespace engine{
namespace test {
// using lib::AllocationCluster;
// using mobject::session::PEffect;
using lib::test::Dummy;
using ::engine::BuffHandle;
using lumiera::error::LUMIERA_ERROR_LIFECYCLE;
using error::LUMIERA_ERROR_LOGIC;
using error::LUMIERA_ERROR_LIFECYCLE;
namespace { // Test fixture
@ -64,21 +60,26 @@ namespace test {
}
/*******************************************************************
* @test verify the OutputSlot interface and base implementation
* by performing full data exchange cycle. This is a
* kind of "dry run" for documentation purposes,
* both the actual OutputSlot implementation
* as the client using this slot are Mocks.
/******************************************************************************
* @test verify and demonstrate the usage cycle of data buffers for the engine
* based on the BufferProvider interface. This is kind of a "dry run"
* for documentation purposes, because the BufferProvider implementation
* used here is just a diagnostics facility, which allows investigating
* the state of individual buffers even after "releasing" them.
*
* This test should help understanding the sequence of buffer management
* operations performed at various stages while passing a calculation job
* through the render engine.
*/
class BufferProviderProtocol_test : public Test
{
virtual void
run (Arg)
{
UNIMPLEMENTED ("build a diagnostic buffer provider and perform a full lifecycle");
verifySimpleUsage();
verifyStandardCase();
verifyObjectAttachment();
verifyObjectAttachmentFailure();
}
@ -93,11 +94,12 @@ namespace test {
BuffHandle buff = provider.lockBufferFor<TestFrame>();
CHECK (buff.isValid());
CHECK (sizeof(TestFrame) <= buff.size());
buff.create<TestFrame>() = testData(0);
buff.accessAs<TestFrame>() = testData(0);
TestFrame& storage = buff.accessAs<TestFrame>();
CHECK (testData(0) == storage);
TestFrame& content = buff.accessAs<TestFrame>();
CHECK (testData(0) == content);
buff.emit();
buff.release();
CHECK (!buff.isValid());
VERIFY_ERROR (LIFECYCLE, buff.accessAs<TestFrame>() );
@ -105,8 +107,6 @@ namespace test {
DiagnosticBufferProvider& checker = DiagnosticBufferProvider::access(provider);
CHECK (checker.buffer_was_used (0));
CHECK (checker.buffer_was_closed (0));
CHECK (checker.object_was_attached<TestFrame> (0));
CHECK (checker.object_was_destroyed<TestFrame> (0));
CHECK (testData(0) == checker.accessMemory (0));
}
@ -147,6 +147,83 @@ namespace test {
CHECK (checker.all_buffers_released());
#endif /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #829
}
void
verifyObjectAttachment()
{
BufferProvider& provider = DiagnosticBufferProvider::build();
BufferDescriptor type_A = provider.getDescriptorFor(sizeof(TestFrame));
BufferDescriptor type_B = provider.getDescriptorFor(sizeof(int));
BufferDescriptor type_C = provider.getDescriptor<int>();
BuffHandle handle_A = provider.lockBuffer(type_A);
BuffHandle handle_B = provider.lockBuffer(type_B);
BuffHandle handle_C = provider.lockBuffer(type_C);
CHECK (handle_A);
CHECK (handle_B);
CHECK (handle_C);
CHECK (sizeof(TestFrame) == handle_A.size());
CHECK (sizeof( int ) == handle_B.size());
CHECK (sizeof( int ) == handle_C.size());
TestFrame& embeddedFrame = handle_A.create<TestFrame>();
CHECK (isSameObject (*handle_A, embeddedFrame));
CHECK (embeddedFrame.isAlive());
CHECK (embeddedFrame.isSane());
VERIFY_ERROR (LOGIC, handle_B.create<TestFrame>()); // too small to hold a TestFrame
VERIFY_ERROR (LIFECYCLE, handle_C.create<int>()); // has already an attached TypeHandler (creating an int)
handle_A.release();
handle_B.release();
handle_C.release();
CHECK (embeddedFrame.isDead());
CHECK (embeddedFrame.isSane());
}
void
verifyObjectAttachmentFailure()
{
BufferProvider& provider = DiagnosticBufferProvider::build();
BufferDescriptor type_D = provider.getDescriptorFor(sizeof(Dummy));
Dummy::checksum() = 0;
BuffHandle handle_D = provider.lockBuffer(type_D);
CHECK (0 == Dummy::checksum()); // nothing created thus far
handle_D.create<Dummy>();
CHECK (0 < Dummy::checksum());
handle_D.release();
CHECK (0 == Dummy::checksum());
BuffHandle handle_DD = provider.lockBuffer(type_D);
CHECK (0 == Dummy::checksum());
Dummy::activateCtorFailure();
CHECK (handle_DD.isValid());
try
{
handle_DD.create<Dummy>();
NOTREACHED ("Dummy ctor should fail");
}
catch (int val)
{
CHECK (!handle_DD.isValid());
CHECK (0 < Dummy::checksum());
CHECK (val == Dummy::checksum());
}
VERIFY_ERROR (LIFECYCLE, handle_DD.accessAs<Dummy>() );
VERIFY_ERROR (LIFECYCLE, handle_DD.create<Dummy>() );
}
};

View file

@ -93,7 +93,7 @@ namespace test {
ModelPort port(pipe);
OutputSlot& oSlot = DiagnosticOutputSlot::build();
Allocation output = oSlot.allocate();
Allocation& output = oSlot.allocate();
Timings timings; /////////TODO
// Invoke test subject...

View file

@ -26,7 +26,7 @@
#include "proc/engine/nodefactory.hpp"
#include "proc/engine/nodewiring.hpp"
#include "proc/engine/stateproxy.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/channel-descriptor.hpp"
#include "proc/mobject/session/effect.hpp"
#include "lib/allocation-cluster.hpp"

View file

@ -0,0 +1,208 @@
/*
TestFrame(Test) - verify proper operation of dummy data frames
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "lib/test/run.hpp"
#include "proc/engine/testframe.hpp"
#include "lib/util.hpp"
#include <cstdlib>
#include <limits.h>
#include <boost/scoped_ptr.hpp>
using test::Test;
using std::rand;
using util::isSameObject;
using boost::scoped_ptr;
namespace engine{
namespace test {
namespace { // used internally
const uint CHAN_COUNT = 30; // independent families of test frames to generate
const uint NUM_FRAMES = 1000; // number of test frames in each of these families
void
corruptMemory(void* base, uint offset, uint count)
{
char* accessor = reinterpret_cast<char*> (base);
while (count--)
accessor[offset+count] = rand() % CHAR_MAX;
}
} // (End) internal defs
/*******************************************************************
* @test verify test helper for engine tests: a dummy data frame.
* TestFrame instances can be created right away, without any
* external library dependencies. A test frame is automatically
* filled with random data; multiple frames are arranged in
* sequences and channels, causing the random data to be
* reproducible yet different in each frame.
*
* To ease writing unit tests, TestFrame provides comparison
* and assignment, and tracks its lifecycle automatically. As tests
* regarding the engine typically have to deal with buffer
* management, an arbitrary memory location can be interpreted
* as TestFrame and checked for corruption.
*/
class TestFrame_test : public Test
{
virtual void
run (Arg)
{
verifyBasicProperties();
verifyFrameLifecycle();
verifyFrameSeries();
useFrameTable();
}
void
verifyBasicProperties()
{
CHECK (1024 < sizeof(TestFrame));
TestFrame frameA;
TestFrame frameB;
TestFrame frameC(5);
CHECK (frameA == frameB);
CHECK (frameA != frameC);
CHECK (frameB != frameC);
CHECK (frameA.isAlive());
CHECK (frameB.isAlive());
CHECK (frameC.isAlive());
CHECK (frameA.isSane());
CHECK (frameB.isSane());
CHECK (frameC.isSane());
void * frameMem = &frameB;
CHECK (frameA == frameMem);
corruptMemory(frameMem,20,5);
CHECK (!frameB.isSane());
frameB = frameC;
CHECK (frameB.isSane());
CHECK (frameA != frameB);
CHECK (frameA != frameC);
CHECK (frameB == frameC);
}
void
verifyFrameLifecycle()
{
CHECK (!TestFrame::isDead (this));
CHECK (!TestFrame::isAlive (this));
TestFrame* onHeap = new TestFrame(23);
CHECK ( TestFrame::isAlive (onHeap));
CHECK (!onHeap->isDead());
CHECK (onHeap->isAlive());
CHECK (onHeap->isSane());
delete onHeap;
CHECK ( TestFrame::isDead (onHeap));
CHECK (!TestFrame::isAlive (onHeap));
}
/** @test build sequences of test frames,
* organised into multiple families (channels).
* Verify that adjacent frames hold differing data
*/
void
verifyFrameSeries()
{
scoped_ptr<TestFrame> thisFrames[CHAN_COUNT];
scoped_ptr<TestFrame> prevFrames[CHAN_COUNT];
for (uint i=0; i<CHAN_COUNT; ++i)
thisFrames[i].reset (new TestFrame(0, i));
for (uint nr=1; nr<NUM_FRAMES; ++nr)
for (uint i=0; i<CHAN_COUNT; ++i)
{
thisFrames[i].swap (prevFrames[i]);
thisFrames[i].reset (new TestFrame(nr, i));
CHECK (thisFrames[i]->isSane());
CHECK (prevFrames[i]->isSane());
CHECK (prevFrames[i]->isAlive());
CHECK (*thisFrames[i] != *prevFrames[i]); // differs from predecessor within the same channel
for (uint j=0; j<i; ++j)
{
ENSURE (j!=i);
CHECK (*thisFrames[i] != *thisFrames[j]); // differs from frames in other channels at this point
CHECK (*thisFrames[i] != *prevFrames[j]); // differs cross wise from predecessors in other channels
} } }
/** @test the table of test frames
* computed on demand */
void
useFrameTable()
{
TestFrame& frX = testData(3,50);
TestFrame& frY = testData(3,25);
TestFrame& frZ = testData(3,50);
CHECK (frX.isSane());
CHECK (frY.isSane());
CHECK (frZ.isSane());
CHECK (frX != frY);
CHECK (frX == frZ);
CHECK (frY != frZ);
CHECK (isSameObject (frX, frZ));
corruptMemory(&frZ,40,20);
CHECK (!frX.isSane());
CHECK (!testData(3,50).isSane());
CHECK ( testData(3,51).isSane());
CHECK ( testData(3,49).isSane());
resetTestFrames();
CHECK ( testData(3,50).isSane());
}
};
/** Register this test class... */
LAUNCHER (TestFrame_test, "unit engine");
}} // namespace engine::test

View file

@ -0,0 +1,292 @@
/*
TestFrame - test data frame (stub) for checking Render engine functionality
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "proc/engine/testframe.hpp"
#include "lib/error.hpp"
#include <boost/random/linear_congruential.hpp>
#include <boost/scoped_ptr.hpp>
#include <limits.h>
#include <cstring>
#include <vector>
namespace engine {
namespace test {
using std::vector;
using std::memcpy;
typedef boost::rand48 PseudoRandom;
namespace error = lumiera::error;
namespace { // hidden local support facilities....
/** @internal helper for generating unique test frames.
* This "discriminator" is used as a random seed when
* filling the test frame data buffers. It is generated
* to be different on adjacent frames of the same series,
* as well as to differ from all nearby channels.
* @param seq the sequence number of the frame within the channel
* @param family the channel this frame belongs to
*/
uint64_t
generateDistinction(uint seq, uint family)
{
// random offset, but fixed per executable run
static uint base(10 + rand() % 990);
// use the family as stepping
return (seq+1) * (base+family);
}
TestFrame&
accessAsTestFrame (void* memoryLocation)
{
REQUIRE (memoryLocation);
return *reinterpret_cast<TestFrame*> (memoryLocation);
}
/**
* @internal table to hold test data frames.
* These frames are built on demand, but retained thereafter.
* Some tests might rely on the actual memory locations, using the
* test frames to simulate a real input frame data stream.
* @param CHA the maximum number of channels to expect
* @param FRA the maximum number of frames to expect per channel
* @warning choose the maximum number parameters wisely.
* We're allocating memory to hold a table of test frames
* e.g. sizeof(TestFrame) * 20 channels * 100 frames ≈ 2 MiB
* The table uses vectors, and thus will grow on demand,
* but this might cause existing frames to be relocated in memory;
* some tests might rely on fixed memory locations. Just be cautious!
*/
template<uint CHA, uint FRA>
struct TestFrameTable
: vector<vector<TestFrame> >
{
typedef vector<vector<TestFrame> > VECT;
TestFrameTable()
: VECT(CHA)
{
for (uint i=0; i<CHA; ++i)
at(i).reserve(FRA);
}
TestFrame&
getFrame (uint seqNr, uint chanNr=0)
{
if (chanNr >= this->size())
{
WARN (test, "Growing table of test frames to %d channels, "
"which is > the default (%d)", chanNr, CHA);
resize(chanNr+1);
}
ENSURE (chanNr < this->size());
vector<TestFrame>& channel = at(chanNr);
if (seqNr >= channel.size())
{
WARN_IF (seqNr >= FRA, test,
"Growing channel #%d of test frames to %d elements, "
"which is > the default (%d)", chanNr, seqNr, FRA);
for (uint i=channel.size(); i<=seqNr; ++i)
channel.push_back (TestFrame (i,chanNr));
}
ENSURE (seqNr < channel.size());
return channel[seqNr];
}
};
const uint INITIAL_CHAN = 20;
const uint INITIAL_FRAMES = 100;
typedef TestFrameTable<INITIAL_CHAN,INITIAL_FRAMES> TestFrames;
boost::scoped_ptr<TestFrames> testFrames;
TestFrame&
accessTestFrame (uint seqNr, uint chanNr)
{
if (!testFrames) testFrames.reset (new TestFrames);
return testFrames->getFrame(seqNr,chanNr);
}
} // (End) hidden impl details
TestFrame&
testData (uint seqNr)
{
return accessTestFrame (seqNr, 0);
}
TestFrame&
testData (uint chanNr, uint seqNr)
{
return accessTestFrame (seqNr,chanNr);
}
void
resetTestFrames()
{
testFrames.reset(0);
}
/* ===== TestFrame class ===== */
TestFrame::~TestFrame()
{
stage_ = DISCARDED;
}
TestFrame::TestFrame(uint seq, uint family)
: distinction_(generateDistinction (seq,family))
, stage_(CREATED)
{
ASSERT (0 < distinction_);
buildData();
}
TestFrame::TestFrame (TestFrame const& o)
: distinction_(o.distinction_)
, stage_(CREATED)
{
memcpy (data_, o.data_, BUFFSIZ);
}
TestFrame&
TestFrame::operator= (TestFrame const& o)
{
if (DISCARDED == stage_)
throw error::Logic ("target TestFrame is already dead");
if (this != &o)
{
distinction_ = o.distinction_;
stage_ = CREATED;
memcpy (data_, o.data_, BUFFSIZ);
}
return *this;
}
/** @note performing an unchecked conversion of the given
* memory location to be accessed as TestFrame.
* The sanity of the data found at that location
* is checked as well, not only the lifecycle flag.
*/
bool
TestFrame::isAlive (void* memLocation)
{
TestFrame& candidate (accessAsTestFrame (memLocation));
return candidate.isSane()
&& candidate.isAlive();
}
bool
TestFrame::isDead (void* memLocation)
{
TestFrame& candidate (accessAsTestFrame (memLocation));
return candidate.isSane()
&& candidate.isDead();
}
bool
TestFrame::operator== (void* memLocation) const
{
TestFrame& candidate (accessAsTestFrame (memLocation));
return candidate.isSane()
&& candidate == *this;
}
bool
TestFrame::contentEquals (TestFrame const& o) const
{
for (uint i=0; i<BUFFSIZ; ++i)
if (data_[i] != o.data_[i])
return false;
return true;
}
bool
TestFrame::verifyData() const
{
PseudoRandom gen(distinction_);
for (uint i=0; i<BUFFSIZ; ++i)
if (data_[i] != (gen() % CHAR_MAX))
return false;
return true;
}
void
TestFrame::buildData()
{
PseudoRandom gen(distinction_);
for (uint i=0; i<BUFFSIZ; ++i)
data_[i] = (gen() % CHAR_MAX);
}
bool
TestFrame::isAlive() const
{
return (CREATED == stage_)
|| (EMITTED == stage_);
}
bool
TestFrame::isDead() const
{
return (DISCARDED == stage_);
}
bool
TestFrame::isSane() const
{
return ( (CREATED == stage_)
||(EMITTED == stage_)
||(DISCARDED == stage_))
&& verifyData();
}
}} // namespace engine::test

View file

@ -25,21 +25,14 @@
#define PROC_ENGINE_TESTFRAME_H
//#include "lib/time/timevalue.hpp"
//#include <string>
//using std::tr1::shared_ptr;
//using std::string;
#include <cstdlib>
#include <stdint.h>
namespace engine {
namespace test {
//class TestPlacement;
/**
* Mock data frame for simulated rendering.
* A test frame can be created and placed instead of a real data frame.
@ -47,52 +40,80 @@ namespace test {
* Placeholder functions are provided for assignment (simulating the actual
* calculations); additional diagnostic functions allow verifying
* the performed operations after the fact.
*
* @todo WIP-WIP-WIP 9/11
*
* Each TestFrame is automatically filled with pseudo random data;
* multiple frames are arranged in sequences and channels, causing the random data
* to be reproducible yet different within each frame. TestFrame's lifecycle is
* tracked and marked in an embedded state field. Moreover, the contents of the
* data block can be verified, because the sequence of bytes is reproducible,
* based on the channel and sequence number of the test frame.
*
* @see TestFrame_test
* @see OutputSlotProtocol_test
*
*/
class TestFrame
{
enum StageOfLife {
CREATED, EMITTED, DISCARDED
};
static const size_t BUFFSIZ = 1024;
uint64_t distinction_;
StageOfLife stage_;
char data_[BUFFSIZ];
public:
~TestFrame();
TestFrame (uint seq=0, uint family=0);
TestFrame (TestFrame const&);
TestFrame& operator= (TestFrame const&);
bool
operator== (void* memLocation)
{
UNIMPLEMENTED ("verify contents of an arbitrary memory location");
}
/** Helper to verify that a given memory location holds
* an active TestFrame instance (created, not yet destroyed)
* @return true if the TestFrame datastructure is intact and
* marked as still alive.
*/
static bool isAlive (void* memLocation);
friend bool
operator== (TestFrame const& f1, TestFrame const& f2)
{
UNIMPLEMENTED ("equality of test data frames");
}
/** Helper to verify a given memory location holds
* an already destroyed TestFrame instance */
static bool isDead (void* memLocation);
friend bool
operator!= (TestFrame const& f1, TestFrame const& f2)
{
return !(f1 == f2);
}
bool isAlive() const;
bool isDead() const;
bool isSane() const;
bool operator== (void* memLocation) const;
friend bool operator== (TestFrame const& f1, TestFrame const& f2) { return f1.contentEquals(f2); }
friend bool operator!= (TestFrame const& f1, TestFrame const& f2) { return !f1.contentEquals(f2); }
private:
bool contentEquals (TestFrame const& o) const;
bool verifyData() const;
void buildData ();
};
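The reproducible-fill scheme described in the comment block above can be condensed into a self-contained sketch. Note the seed formula and the LCG generator here are illustrative assumptions, not the actual PseudoRandom implementation backing TestFrame:

```cpp
#include <cassert>
#include <cstddef>
#include <climits>
#include <stdint.h>

// Sketch: each (seq,family) pair yields a distinct seed; the data block is
// filled from a deterministic generator, so verification just re-generates
// the byte sequence and compares. (Placeholder PRNG, not Lumiera's code.)
struct MiniFrame
  {
    static const size_t BUFFSIZ = 64;
    uint64_t distinction_;
    char data_[BUFFSIZ];
    
    MiniFrame (unsigned seq, unsigned family)
      : distinction_ (1 + 1000*seq + family)        // assumed seed derivation
      {
        uint64_t state = distinction_;
        for (size_t i=0; i<BUFFSIZ; ++i)
          data_[i] = char (next(state) % CHAR_MAX);
      }
    
    bool
    verifyData()  const                             // re-generate and compare
      {
        uint64_t state = distinction_;
        for (size_t i=0; i<BUFFSIZ; ++i)
          if (data_[i] != char (next(state) % CHAR_MAX))
            return false;
        return true;
      }
    
  private:
    static uint64_t
    next (uint64_t& state)                          // simple LCG step
      {
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        return state >> 33;
      }
  };
```

Two frames built with the same (seq,family) thus hold identical data, while any corruption of the buffer is detected by verifyData().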
inline TestFrame
testData (uint seqNr)
{
UNIMPLEMENTED ("build, memorise and expose test data frames on demand");
}
/** Helper to access a specific frame of test data at a fixed memory location.
* The series of test frames is generated on demand, but remains in memory thereafter,
* similar to real data accessible from some kind of source stream. Each of these generated
* test frames is filled with different yet reproducible pseudo random data.
* Client code is free to access and corrupt this data.
*/
TestFrame& testData (uint seqNr);
inline TestFrame
testData (uint chanNr, uint seqNr)
{
UNIMPLEMENTED ("build, memorise and expose test data frames on demand (multi-channel)");
}
TestFrame& testData (uint chanNr, uint seqNr);
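A possible shape for this on-demand, memoised frame table — the key type and container here are assumptions for illustration, not the actual testFrames implementation:

```cpp
#include <cassert>
#include <map>
#include <utility>

// Frames are built lazily, keyed by (channel, sequence number), and then
// stay in memory, so repeated access returns the very same object.
// std::map guarantees reference stability when further entries are added.
struct Frame { unsigned chan, seq; };                    // stand-in for TestFrame

typedef std::map<std::pair<unsigned,unsigned>, Frame> FrameTable;
FrameTable frameTable;

Frame&
testFrame (unsigned chanNr, unsigned seqNr)
{
  std::pair<unsigned,unsigned> key (chanNr, seqNr);
  FrameTable::iterator pos = frameTable.find (key);
  if (pos == frameTable.end())
    {                                                    // generate on first access
      Frame newFrame = { chanNr, seqNr };
      pos = frameTable.insert (std::make_pair (key, newFrame)).first;
    }
  return pos->second;
}

void resetFrames() { frameTable.clear(); }               // analogous to resetTestFrames()
```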
/* == some test data to check == */
// extern const lib::time::Duration LENGTH_TestClip;
/** discards all the TestFrame instances and
* initialises an empty table of test frames */
void resetTestFrames();
}} // namespace engine::test

View file

@ -0,0 +1,222 @@
/*
TrackingHeapBlockProvider(Test) - verify a support facility for diagnostic/test purposes
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "lib/error.hpp"
#include "lib/test/run.hpp"
#include "proc/engine/tracking-heap-block-provider.hpp"
#include "proc/engine/buffhandle-attach.hpp"
#include "proc/engine/testframe.hpp"
#include <cstdlib>
#include <vector>
using std::rand;
namespace engine{
namespace test {
namespace { // Test fixture
const size_t TEST_ELM_SIZE = sizeof(uint);
const uint MAX_ELMS = 50;
std::vector<uint> testNumbers(MAX_ELMS);
bool
has_expectedContent (uint nr, diagn::Block& memoryBlock)
{
void* mem = memoryBlock.accessMemory();
uint data = *static_cast<uint*> (mem);
return data == testNumbers[nr];
}
bool
verifyUsedBlock (uint nr, diagn::Block& memoryBlock)
{
return memoryBlock.was_used()
&& memoryBlock.was_closed()
&& has_expectedContent (nr, memoryBlock);
}
}
/**********************************************************************
* @test verify a test support facility, used to write mock components
* to test the lumiera engine. The TrackingHeapBlockProvider is a
* braindead implementation of the BufferProvider interface: it just
* claims new heap blocks and never de-allocates them, allowing other
* test and mock objects to verify allocated buffers after the fact.
*/
class TrackingHeapBlockProvider_test : public Test
{
virtual void
run (Arg)
{
simpleExample();
verifyStandardCase();
verifyTestProtocol();
}
void
simpleExample()
{
TrackingHeapBlockProvider provider;
BuffHandle testBuff = provider.lockBufferFor<TestFrame>();
CHECK (testBuff);
CHECK (testBuff.accessAs<TestFrame>().isSane());
uint dataID = 1 + rand() % 29;
testBuff.accessAs<TestFrame>() = testData(dataID);
provider.emitBuffer (testBuff);
provider.releaseBuffer(testBuff);
diagn::Block& block0 = provider.access_emitted(0);
CHECK (testData(dataID) == block0.accessMemory());
}
void
verifyStandardCase()
{
TrackingHeapBlockProvider provider;
BufferDescriptor buffType = provider.getDescriptorFor(TEST_ELM_SIZE);
uint numElms = provider.announce(MAX_ELMS, buffType);
CHECK (0 < numElms);
CHECK (numElms <= MAX_ELMS);
for (uint i=0; i<numElms; ++i)
{
BuffHandle buff = provider.lockBuffer(buffType);
buff.accessAs<uint>() = testNumbers[i] = rand() % 100000;
provider.emitBuffer (buff);
provider.releaseBuffer(buff);
}
for (uint nr=0; nr<numElms; ++nr)
{
CHECK (verifyUsedBlock (nr, provider.access_emitted(nr)));
}
}
void
verifyTestProtocol()
{
TrackingHeapBlockProvider provider;
BufferDescriptor buffType = provider.getDescriptorFor(TEST_ELM_SIZE);
BuffHandle bu1 = provider.lockBuffer (buffType);
BuffHandle bu2 = provider.lockBuffer (buffType);
BuffHandle bu3 = provider.lockBuffer (buffType);
BuffHandle bu4 = provider.lockBuffer (buffType);
BuffHandle bu5 = provider.lockBuffer (buffType);
// buffers are locked,
// but still within the per-type allocation pool
// while the output sequence is still empty
CHECK (!provider.access_emitted(0).was_used());
CHECK (!provider.access_emitted(1).was_used());
CHECK (!provider.access_emitted(2).was_used());
CHECK (!provider.access_emitted(3).was_used());
CHECK (!provider.access_emitted(4).was_used());
// can use the buffers for real
bu1.accessAs<uint>() = 1;
bu2.accessAs<uint>() = 2;
bu3.accessAs<uint>() = 3;
bu4.accessAs<uint>() = 4;
bu5.accessAs<uint>() = 5;
CHECK (0 == provider.emittedCnt());
// now emit buffers in shuffled order
provider.emitBuffer (bu3);
provider.emitBuffer (bu1);
provider.emitBuffer (bu5);
provider.emitBuffer (bu4);
provider.emitBuffer (bu2);
CHECK (5 == provider.emittedCnt());
CHECK (3 == provider.accessAs<uint>(0));
CHECK (1 == provider.accessAs<uint>(1));
CHECK (5 == provider.accessAs<uint>(2));
CHECK (4 == provider.accessAs<uint>(3));
CHECK (2 == provider.accessAs<uint>(4));
CHECK ( provider.access_emitted(0).was_used());
CHECK ( provider.access_emitted(1).was_used());
CHECK ( provider.access_emitted(2).was_used());
CHECK ( provider.access_emitted(3).was_used());
CHECK ( provider.access_emitted(4).was_used());
CHECK (!provider.access_emitted(0).was_closed());
CHECK (!provider.access_emitted(1).was_closed());
CHECK (!provider.access_emitted(2).was_closed());
CHECK (!provider.access_emitted(3).was_closed());
CHECK (!provider.access_emitted(4).was_closed());
bu5.release();
CHECK (!provider.access_emitted(0).was_closed());
CHECK (!provider.access_emitted(1).was_closed());
CHECK ( provider.access_emitted(2).was_closed());
CHECK (!provider.access_emitted(3).was_closed());
CHECK (!provider.access_emitted(4).was_closed());
bu2.release();
bu2.release();
bu5.release();
CHECK (!provider.access_emitted(0).was_closed());
CHECK (!provider.access_emitted(1).was_closed());
CHECK ( provider.access_emitted(2).was_closed());
CHECK (!provider.access_emitted(3).was_closed());
CHECK ( provider.access_emitted(4).was_closed());
CHECK (!bu2);
CHECK (bu3);
bu1.release();
bu3.release();
bu4.release();
CHECK (5 == provider.emittedCnt());
}
};
/** Register this test class... */
LAUNCHER (TrackingHeapBlockProvider_test, "unit player");
}} // namespace engine::test
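The tracking idea exercised by the test above can be condensed into a few lines; the following stand-in is a simplified assumption about the behaviour, not the real TrackingHeapBlockProvider API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// A "tracking" provider hands out heap blocks, marks them on release instead
// of freeing them, and keeps every block on record, so a test can inspect
// buffer contents and lifecycle flags after the fact.
class TrackingProvider
  {
    struct Block { void* mem; bool closed; };
    std::vector<Block> blocks_;
    
  public:
    size_t
    lock (size_t siz)                         // allocate, return block index
      {
        Block b = { std::malloc (siz), false };
        blocks_.push_back (b);
        return blocks_.size() - 1;
      }
    
    void   release (size_t idx)         { blocks_[idx].closed = true; }
    void*  memory  (size_t idx)   const { return blocks_[idx].mem; }
    bool   wasClosed (size_t idx) const { return blocks_[idx].closed; }
    size_t emittedCnt()           const { return blocks_.size(); }
    
   ~TrackingProvider()                        // blocks live until the provider dies
      {
        for (size_t i=0; i<blocks_.size(); ++i)
          std::free (blocks_[i].mem);
      }
  };
```

Because released blocks are only flagged, never deallocated, checks on usage state and buffer contents remain valid after the client is done.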

View file

@ -32,15 +32,19 @@
#include "lib/error.hpp"
#include "include/logging.h"
#include "proc/play/output-slot.hpp"
#include "proc/play/output-slot-connection.hpp"
#include "proc/engine/buffhandle.hpp"
#include "proc/engine/tracking-heap-block-provider.hpp"
#include "lib/iter-source.hpp" ////////////TODO really going down that path...?
#include "proc/engine/testframe.hpp"
//#include "lib/sync.hpp"
//#include <boost/noncopyable.hpp>
#include <boost/noncopyable.hpp>
//#include <string>
//#include <vector>
//#include <tr1/memory>
#include <tr1/memory>
//#include <boost/scoped_ptr.hpp>
@ -48,14 +52,105 @@ namespace proc {
namespace play {
//using std::string;
using ::engine::BufferDescriptor;
using ::engine::test::TestFrame;
using ::engine::TrackingHeapBlockProvider;
//using std::vector;
//using std::tr1::shared_ptr;
using std::tr1::shared_ptr;
//using boost::scoped_ptr;
class TrackingInMemoryBlockSequence
: public OutputSlot::Connection
{
shared_ptr<TrackingHeapBlockProvider> buffProvider_;
BufferDescriptor bufferType_;
/* === Connection API === */
BuffHandle
claimBufferFor(FrameID frameNr)
{
return buffProvider_->lockBuffer (bufferType_);
}
bool
isTimely (FrameID frameNr, TimeValue currentTime)
{
if (Time::MAX == currentTime)
return true;
UNIMPLEMENTED ("find out about timings");
return false;
}
void
transfer (BuffHandle const& filledBuffer)
{
pushout (filledBuffer);
}
void
pushout (BuffHandle const& data4output)
{
buffProvider_->emitBuffer (data4output);
buffProvider_->releaseBuffer(data4output);
}
void
discard (BuffHandle const& supersededData)
{
buffProvider_->releaseBuffer (supersededData);
}
void
shutDown ()
{
buffProvider_.reset();
}
public:
TrackingInMemoryBlockSequence()
: buffProvider_(new TrackingHeapBlockProvider())
, bufferType_(buffProvider_->getDescriptor<TestFrame>())
{
INFO (engine_dbg, "building in-memory diagnostic output sequence");
}
virtual
~TrackingInMemoryBlockSequence()
{
INFO (engine_dbg, "releasing diagnostic output sequence");
}
};
class SimulatedOutputSequences
: public ConnectionStateManager<TrackingInMemoryBlockSequence>
, boost::noncopyable
{
TrackingInMemoryBlockSequence
buildConnection()
{
return TrackingInMemoryBlockSequence();
}
public:
SimulatedOutputSequences (uint numChannels)
{
init (numChannels);
}
};
/********************************************************************
* Helper for unit tests: Mock output sink.
*
@ -64,6 +159,17 @@ namespace play {
class DiagnosticOutputSlot
: public OutputSlot
{
static const uint MAX_CHANNELS = 5;
/* === hook into the OutputSlot frontend === */
ConnectionState*
buildState()
{
return new SimulatedOutputSequences(MAX_CHANNELS);
}
public:
/** build a new Diagnostic Output Slot instance,
* discard the existing one. Use the static query API
@ -105,28 +211,28 @@ namespace play {
bool
buffer_was_used (uint channel, FrameNr frame)
buffer_was_used (uint channel, FrameID frame)
{
UNIMPLEMENTED ("determine if the denoted buffer was indeed used");
}
bool
buffer_unused (uint channel, FrameNr frame)
buffer_unused (uint channel, FrameID frame)
{
UNIMPLEMENTED ("determine if the specified buffer was never touched/locked for use");
}
bool
buffer_was_closed (uint channel, FrameNr frame)
buffer_was_closed (uint channel, FrameID frame)
{
UNIMPLEMENTED ("determine if the specified buffer was indeed closed properly");
}
bool
emitted (uint channel, FrameNr frame)
emitted (uint channel, FrameID frame)
{
UNIMPLEMENTED ("determine if the specified buffer was indeed handed over for emitting output");
}

View file

@ -71,7 +71,6 @@ namespace test {
void
verifyStandardCase()
{
#if false /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #819
// Create Test fixture.
// In real usage, the OutputSlot will be preconfigured
// (Media format, number of channels, physical connections)
@ -80,7 +79,7 @@ namespace test {
// Client claims the OutputSlot
// and opens it for exclusive use.
OutputSlot::Allocation alloc = oSlot.allocate();
OutputSlot::Allocation& alloc = oSlot.allocate();
// Now the client is able to prepare
// "calculation streams" for the individual
@ -107,9 +106,9 @@ namespace test {
buff10.accessAs<TestFrame>() = testData(1,0);
// Now it's time to emit the output
sink2.emit (frameNr-1);
sink2.emit (frameNr );
sink1.emit (frameNr-1);
sink2.emit (frameNr-1, buff10);
sink2.emit (frameNr , buff11);
sink1.emit (frameNr-1, buff00);
// that's all for the client
// Verify sane operation....
@ -139,6 +138,7 @@ namespace test {
CHECK (*stream1++ == testData(1,0));
CHECK (*stream1++ == testData(1,1));
CHECK (!stream1);
#if false /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #819
#endif /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #819
}
};

View file

@ -0,0 +1,176 @@
/*
MaybeValue(Test) - considerations for dealing with optional values
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "lib/test/run.hpp"
#include "lib/test/test-helper.hpp"
#include "lib/maybe.hpp"
#include "lib/util.hpp"
//#include <tr1/functional>
//#include <cstdlib>
namespace lib {
namespace test{
namespace error = lumiera::error;
// using util::isSameObject;
// using std::rand;
using util::isnil;
using error::LUMIERA_ERROR_BOTTOM_VALUE;
namespace { // test data and helpers...
uint INVOCATION_CNT(0);
/** helper for testing delayed evaluation */
template<typename VAL>
class Delayed
{
VAL v_;
public:
Delayed (VAL val) : v_(val) { }
VAL
operator() () const
{
++INVOCATION_CNT;
return v_;
}
};
template<typename VAL>
inline Delayed<VAL>
yield (VAL val)
{
}
}
/***************************************************************************************
* @test Investigate various situations of using a Maybe value or option monad.
* @note this is a testbed for experiments for the time being 11/2011
*
* @see lib::Maybe
* @see null-value-test.cpp
* @see util::AccessCasted
*/
class MaybeValue_test : public Test
{
void
run (Arg)
{
show_basicOperations();
show_delayedAccess();
}
void
show_basicOperations()
{
#if false /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #856
Maybe<int> one(1);
Maybe<int> opt(5);
Maybe<int> nil;
CHECK (opt); CHECK (!isnil(opt));
CHECK (!nil); CHECK ( isnil(nil));
// access the optional value
CHECK (1 == *one);
CHECK (5 == *opt);
// can't access a bottom value
VERIFY_ERROR (BOTTOM_VALUE, *nil);
// flatMap operation (apply a function)
CHECK (7 == *(opt >>= inc2));
CHECK (9 == *(opt >>= inc2 >>= inc2));
// alternatives
CHECK (1 == *(one || opt));
CHECK (5 == *(nil || opt));
CHECK (1 == *(nil || one || opt));
CHECK (1 == one.get());
CHECK (1 == one.getOrElse(9));
CHECK (9 == nil.getOrElse(9));
#endif /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #856
}
void
show_delayedAccess()
{
INVOCATION_CNT = 0;
Maybe<int> nil;
Maybe<int> two(2);
#if false /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #856
Maybe<int(void)> later(yield(5));
CHECK (0 == INVOCATION_CNT);
CHECK (2 == *(two || later));
CHECK (0 == INVOCATION_CNT);
CHECK (5 == *(nil || later));
CHECK (1 == INVOCATION_CNT);
later.get();
CHECK (2 == INVOCATION_CNT);
CHECK (2 == two.getOrElse(later));
CHECK (2 == INVOCATION_CNT);
CHECK (5 == nil.getOrElse(later));
CHECK (3 == INVOCATION_CNT);
// obviously, this also works just with a function
CHECK (7 == nil.getOrElse(yield(7)));
CHECK (4 == INVOCATION_CNT);
// stripping the delayed evaluation
Maybe<int> some = later;
CHECK (5 == INVOCATION_CNT);
CHECK (5 == some);
CHECK (5 == INVOCATION_CNT);
#endif /////////////////////////////////////////////////////////////////////////////////////////////////////////////UNIMPLEMENTED :: TICKET #856
}
};
/** Register this test class... */
LAUNCHER (MaybeValue_test, "unit common");
}} // namespace lib::test
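Since lib/maybe.hpp is still pending (TICKET #856), here is a minimal sketch of the option semantics the disabled checks exercise — names and behaviour are assumptions for illustration, not the planned lib::Maybe:

```cpp
#include <cassert>
#include <stdexcept>

// Minimal option type: either holds a value, or represents a "bottom value";
// dereferencing a bottom value raises an error, the alternative operation
// picks the first defined operand, and getOrElse supplies a fallback.
template<typename T>
class Maybe
  {
    bool set_;
    T    val_;
    
  public:
    Maybe ()    : set_(false), val_()  { }
    Maybe (T v) : set_(true),  val_(v) { }
    
    bool isDefined()  const { return set_; }
    
    T
    operator* ()  const
      {
        if (!set_)
          throw std::logic_error ("access to bottom value");
        return val_;
      }
    
    T getOrElse (T fallback)  const { return set_? val_ : fallback; }
    
    Maybe
    operator|| (Maybe const& alt)  const    // first defined alternative
      {
        return set_? *this : alt;
      }
  };
```

Note that overloading operator|| forfeits short-circuit evaluation, which is why the delayed-evaluation experiments above matter.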

View file

@ -28,7 +28,7 @@
#include "lib/error.hpp"
#include "lib/scoped-holder.hpp"
#include "testdummy.hpp"
#include "lib/test/testdummy.hpp"
#include <boost/noncopyable.hpp>
#include <iostream>
@ -85,20 +85,20 @@ namespace test{
void
checkAllocation()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
HO holder;
CHECK (!holder);
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
create_contained_object (holder);
CHECK (holder);
CHECK (false!=holder);
CHECK (holder!=false);
CHECK (0!=checksum);
CHECK (0 != Dummy::checksum());
CHECK ( &(*holder));
CHECK (holder->add(2) == checksum+2);
CHECK (holder->add(2) == 2 + Dummy::checksum());
Dummy *rawP = holder.get();
CHECK (rawP);
@ -111,7 +111,7 @@ namespace test{
TRACE (test, "size(object) = %lu", sizeof(*holder));
TRACE (test, "size(holder) = %lu", sizeof(holder));
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
@ -119,11 +119,11 @@ namespace test{
void
checkErrorHandling()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
HO holder;
throw_in_ctor = true;
Dummy::activateCtorFailure();
try
{
create_contained_object (holder);
@ -131,15 +131,15 @@ namespace test{
}
catch (int val)
{
CHECK (0!=checksum);
checksum -= val;
CHECK (0==checksum);
CHECK (0 != Dummy::checksum());
Dummy::checksum() -= val;
CHECK (0 == Dummy::checksum());
}
CHECK (!holder); /* because the exception happens in ctor
object doesn't count as "created" */
throw_in_ctor = false;
Dummy::activateCtorFailure(false);
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
@ -147,7 +147,7 @@ namespace test{
void
checkCopyProtocol()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
HO holder;
HO holder2 (holder);
@ -158,38 +158,38 @@ namespace test{
CHECK (!holder);
create_contained_object (holder);
CHECK (holder);
long currSum = checksum;
long currSum = Dummy::checksum();
void* adr = holder.get();
VERIFY_ERROR(LOGIC, holder2 = holder );
CHECK (holder);
CHECK (!holder2);
CHECK (holder.get()==adr);
CHECK (checksum==currSum);
CHECK (Dummy::checksum()==currSum);
VERIFY_ERROR(LOGIC, holder = holder2 );
CHECK (holder);
CHECK (!holder2);
CHECK (holder.get()==adr);
CHECK (checksum==currSum);
CHECK (Dummy::checksum()==currSum);
create_contained_object (holder2);
CHECK (holder2);
CHECK (checksum != currSum);
currSum = checksum;
CHECK (Dummy::checksum() != currSum);
currSum = Dummy::checksum();
VERIFY_ERROR(LOGIC, holder = holder2 );
CHECK (holder);
CHECK (holder2);
CHECK (holder.get()==adr);
CHECK (checksum==currSum);
CHECK (Dummy::checksum()==currSum);
VERIFY_ERROR(LOGIC, HO holder3 (holder2) );
CHECK (holder);
CHECK (holder2);
CHECK (checksum==currSum);
CHECK (Dummy::checksum()==currSum);
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
@ -202,7 +202,7 @@ namespace test{
{
typedef std::map<int,HO> MapHO;
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
MapHO maph;
CHECK (isnil (maph));
@ -212,8 +212,8 @@ namespace test{
HO & contained = maph[i];
CHECK (!contained);
} // 100 holder objects created by sideeffect
CHECK (0==checksum); // ..... without creating any contained object!
// ..... without creating any contained object!
CHECK (0 == Dummy::checksum());
CHECK (!isnil (maph));
CHECK (100==maph.size());
@ -224,14 +224,14 @@ namespace test{
CHECK (0 < maph[i]->add(12));
}
CHECK (100==maph.size());
CHECK (0!=checksum);
CHECK (0 != Dummy::checksum());
long value55 = maph[55]->add(0);
long currSum = checksum;
long currSum = Dummy::checksum();
CHECK (1 == maph.erase(55));
CHECK (checksum == currSum - value55); // proves object#55's dtor has been invoked
CHECK (Dummy::checksum() == currSum - value55); // proves object#55's dtor has been invoked
CHECK (maph.size() == 99);
maph[55]; // create new empty holder by sideeffect...
@ -239,7 +239,7 @@ namespace test{
CHECK (!maph[55]);
CHECK (maph.size() == 100);
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}

View file

@ -27,7 +27,7 @@
#include "lib/scoped-holder.hpp"
#include "lib/scoped-holder-transfer.hpp"
#include "testdummy.hpp"
#include "lib/test/testdummy.hpp"
#include <iostream>
#include <vector>
@ -125,17 +125,17 @@ namespace test {
void
buildVector()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
typedef typename Table<HO>::Type Vect;
Vect table(50);
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
for (uint i=0; i<10; ++i)
create_contained_object (table[i]);
CHECK (0 < checksum);
CHECK (0 < Dummy::checksum());
CHECK ( table[9]);
CHECK (!table[10]);
@ -145,7 +145,7 @@ namespace test {
CHECK (rawP == &(*table[5]));
CHECK (rawP->add(-555) == table[5]->add(-555));
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
@ -153,29 +153,29 @@ namespace test {
void
growVector()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
typedef typename Table<HO>::Type Vect;
Vect table;
table.reserve(2);
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
cout << ".\n..install one element at index[0]\n";
table.push_back(HO());
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
create_contained_object (table[0]); // switches into "managed" state
CHECK (0 < checksum);
int theSum = checksum;
CHECK (0 < Dummy::checksum());
int theSum = Dummy::checksum();
cout << ".\n..*** resize table to 16 elements\n";
for (uint i=0; i<15; ++i)
table.push_back(HO());
CHECK (theSum==checksum);
CHECK (theSum == Dummy::checksum());
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
@ -183,37 +183,37 @@ namespace test {
void
checkErrorHandling()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
typedef typename Table<HO>::Type Vect;
Vect table(5);
table.reserve(5);
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
create_contained_object (table[2]);
create_contained_object (table[4]);
CHECK (0 < checksum);
int theSum = checksum;
CHECK (0 < Dummy::checksum());
int theSum = Dummy::checksum();
cout << ".\n.throw some exceptions...\n";
throw_in_ctor = true;
Dummy::activateCtorFailure();
try
{
create_contained_object (table[3]);
NOTREACHED ();
NOTREACHED ("ctor should throw");
}
catch (int val)
{
CHECK (theSum < checksum);
checksum -= val;
CHECK (theSum==checksum);
CHECK (theSum < Dummy::checksum());
Dummy::checksum() -= val;
CHECK (theSum == Dummy::checksum());
}
CHECK ( table[2]);
CHECK (!table[3]); // not created because of exception
CHECK ( table[4]);
throw_in_ctor = false;
Dummy::activateCtorFailure(false);
throw_in_transfer=true; // can do this only when using ScopedHolder
try
{
@ -223,10 +223,10 @@ namespace test {
{
CHECK ( table.size() < 10);
}
CHECK (theSum == checksum);
CHECK (theSum == Dummy::checksum());
throw_in_transfer=false;
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
};

View file

@ -27,7 +27,7 @@
#include "lib/util.hpp"
#include "lib/scoped-ptrvect.hpp"
#include "testdummy.hpp"
#include "lib/test/testdummy.hpp"
namespace lib {
@ -43,7 +43,9 @@ namespace test{
/********************************************************************
* @test ScopedPtrVect manages the lifecycle of a number of objects.
* @todo implement detaching of objects
* The API is similar to a vector and allows for element access
* and iteration. Individual elements can be detached and thus
* removed from the responsibility of the container.
*/
class ScopedPtrVect_test : public Test
{
@ -53,7 +55,7 @@ namespace test{
{
simpleUsage();
iterating();
// detaching();
detaching();
}
@ -61,16 +63,16 @@ namespace test{
void
simpleUsage()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
VectD holder;
CHECK (isnil (holder));
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
Dummy* ptr = new Dummy();
Dummy& ref = holder.manage (ptr);
CHECK (!isnil (holder));
CHECK (0!=checksum);
CHECK (0 != Dummy::checksum());
CHECK (&ref==ptr);
holder.manage (new Dummy);
@ -78,7 +80,7 @@ namespace test{
CHECK (3 == holder.size());
holder.clear();
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
CHECK (isnil (holder));
holder.manage (new Dummy);
@ -91,16 +93,16 @@ namespace test{
holder.manage (new Dummy);
holder.manage (new Dummy);
CHECK (9 == holder.size());
CHECK (0!=checksum);
CHECK (0 < Dummy::checksum());
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
void
iterating()
{
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
{
VectD holder;
for (int i=0; i<16; ++i)
@ -140,11 +142,49 @@ namespace test{
VERIFY_ERROR (ITER_EXHAUST, ++cii );
}
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
void
detaching()
{
int id2, id3;
Dummy* extracted(0);
CHECK (0 == Dummy::checksum());
{
VectD holder;
CHECK (0 == Dummy::checksum());
CHECK (isnil (holder));
holder.manage (new Dummy);
holder.manage (new Dummy);
holder.manage (new Dummy);
holder.manage (new Dummy);
holder.manage (new Dummy);
CHECK (5 == holder.size());
CHECK (0 < Dummy::checksum());
id2 = holder[2].getVal();
id3 = holder[3].getVal();
extracted = holder.detach(& holder[2]);
CHECK (id2 == extracted->getVal());
CHECK (id3 == holder[2].getVal());
CHECK (4 == holder.size());
}
CHECK (0 < Dummy::checksum()); // not all dummies are dead
CHECK (id2 == Dummy::checksum()); // #2 is alive!
extracted->setVal(id2+id3);
CHECK (id2+id3 == Dummy::checksum());
delete extracted;
CHECK (0 == Dummy::checksum());
}
};
LAUNCHER (ScopedPtrVect_test, "unit common");

View file

@ -1,87 +0,0 @@
/*
TESTDUMMY.hpp - yet another test dummy for tracking ctor/dtor calls
Copyright (C) Lumiera.org
2008, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include <boost/noncopyable.hpp>
#include <algorithm>
namespace lib {
namespace test{
namespace { // yet another test dummy
long checksum = 0;
bool throw_in_ctor = false;
class Dummy
: boost::noncopyable
{
int val_;
public:
Dummy ()
: val_(1 + (rand() % 100000000))
{ init(); }
Dummy (int v)
: val_(v)
{ init(); }
~Dummy()
{
checksum -= val_;
}
long add (int i) { return val_+i; }
int getVal() const { return val_; }
void
setVal (int newVal)
{
checksum += newVal - val_;
val_ = newVal;
}
friend void
swap (Dummy& dum1, Dummy& dum2) ///< checksum neutral
{
std::swap(dum1.val_, dum2.val_);
}
private:
void
init()
{
checksum += val_;
if (throw_in_ctor)
throw val_;
}
};
} // anonymous test dummy
}} // namespace lib::test
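The replacement (lib/test/testdummy.hpp) referenced by the updated includes turns the free checksum variable into a static accessor. The tracking mechanism itself can be sketched as follows (a simplified stand-in, not the actual new header):

```cpp
#include <cassert>

// Each live Dummy adds its value to a shared checksum on construction and
// subtracts it on destruction; a zero checksum after leaving a scope thus
// proves that every instance created inside was properly destroyed.
class Dummy
  {
    int val_;
    
  public:
    static long&
    checksum()                    // static accessor instead of a free variable
      {
        static long sum = 0;
        return sum;
      }
    
    Dummy (int v) : val_(v) { checksum() += val_; }
   ~Dummy()                 { checksum() -= val_; }
    
    int getVal()  const { return val_; }
  };
```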

View file

@ -25,7 +25,7 @@
#include "lib/test/run.hpp"
#include "lib/scoped-holder-transfer.hpp"
#include "testdummy.hpp"
#include "lib/test/testdummy.hpp"
#include <iostream>
#include <vector>
@ -130,27 +130,27 @@ namespace test {
cout << "\n..setup table space for 2 elements\n";
TransDummyVector table;
table.reserve(2);
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
cout << "\n..install one element at index[0]\n";
table.push_back(TransDummy());
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
table[0].setup(); // switches into "managed" state
CHECK (0 < checksum);
int theSum = checksum;
CHECK (0 < Dummy::checksum());
int theSum = Dummy::checksum();
cout << "\n..*** resize table to 5 elements\n";
table.resize(5);
CHECK (theSum==checksum);
CHECK (theSum==Dummy::checksum());
cout << "\n..install another element\n";
table[3].setup(375);
CHECK (theSum+375==checksum);
CHECK (theSum+375==Dummy::checksum());
cout << "\n..kill all elements....\n";
table.clear();
CHECK (0==checksum);
CHECK (0 == Dummy::checksum());
}
};

View file

@ -1133,8 +1133,8 @@ Beyond that, it can be necessary to associate at least a state flag with //indiv
__Note__: while the API to access this service is uniform, conceptually there is a difference between just using the (shared) type information and associating individual metadata, like the buffer state. Type-~IDs, once allocated, will never be discarded (within the lifetime of a Lumiera application instance -- buffer associations aren't persistent). To the contrary, individual metadata //will be discarded,// when releasing the corresponding buffer. According to the ''prototype pattern'', individual metadata is treated as a one-way-off specialisation.
</pre>
</div>
<div title="BufferProvider" modifier="Ichthyostega" modified="201109232347" created="201107082330" tags="Rendering spec draft" changecount="20">
<pre>It turns out that -- throughout the render engine implementation -- we never need direct access to the buffers holding media data. Buffers are just some entity to be //managed,// i.e. &quot;allocated&quot;, &quot;locked&quot; and &quot;released&quot;; the //actual meaning of these operations can be left to the implementation.// The code within the render engine just pushes around ''smart-prt like handles''. These [[buffer handles|BuffHandle]] act as a front-end, being created by and linked to a buffer provider implementation. There is no need to manage the lifecycle of buffers automatically, because the use of buffers is embedded into the render calculation cycle, which follows a rather strict protocol anyway. Relying on the [[capabilities of the scheduler|SchedulerRequirements]], the sequence of individual jobs in the engine ensures...
<div title="BufferProvider" modifier="Ichthyostega" modified="201111192223" created="201107082330" tags="Rendering spec draft" changecount="22">
<pre>It turns out that -- throughout the render engine implementation -- we never need direct access to the buffers holding actual media data. Buffers are just some entity to be //managed,// i.e. &quot;allocated&quot;, &quot;locked&quot; and &quot;released&quot;; the //actual meaning of these operations can be left to the implementation.// The code within the render engine just pushes around ''smart-ptr like handles''. These [[buffer handles|BuffHandle]] act as a front-end, being created by and linked to a buffer provider implementation. There is no need to manage the lifecycle of buffers automatically, because the use of buffers is embedded into the render calculation cycle, which follows a rather strict protocol anyway. Relying on the [[capabilities of the scheduler|SchedulerRequirements]], the sequence of individual jobs in the engine ensures...
* that the availability of a buffer was ensured prior to planning a job (&quot;buffer allocation&quot;)
* that a buffer handle was obtained (&quot;locked&quot;) prior to any operation requiring a buffer
* that buffers are marked as free (&quot;released&quot;) after doing the actual calculations.
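The announce/lock/release cycle described above can be sketched as a toy provider implementation. All names and signatures here (SimpleProvider, BuffHandle, announce, lock, release) are illustrative assumptions following the wiki text, not the real Lumiera BufferProvider API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct BuffHandle          // smart-ptr like front-end to a managed buffer
  {
    void* buffer;
    size_t index;          // slot index within the provider, used for release
  };

class SimpleProvider       // toy provider backed by plain heap storage
  {
    std::vector<std::vector<char>> buffers_;
    std::vector<bool> inUse_;
    size_t bufferSize_;

  public:
    explicit SimpleProvider (size_t siz) : bufferSize_(siz) { }

    // ensure at least cnt buffers exist; return the number guaranteed available
    size_t announce (size_t cnt)
      {
        while (buffers_.size() < cnt)
          {
            buffers_.emplace_back (bufferSize_);
            inUse_.push_back (false);
          }
        return buffers_.size();
      }

    // mark a free buffer as used and hand out a handle; grow on demand
    BuffHandle lock()
      {
        for (size_t i=0; i < buffers_.size(); ++i)
          if (!inUse_[i])
            {
              inUse_[i] = true;
              return BuffHandle{ buffers_[i].data(), i };
            }
        announce (buffers_.size() + 1);
        return lock();
      }

    // mark the buffer as free again; the handle becomes invalid
    void release (BuffHandle const& h) { inUse_[h.index] = false; }
  };
```

A real provider would of course defer or pool allocations differently; the point of the sketch is only the protocol: announcing is optional, locking marks a buffer as used, releasing returns it to the free pool.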
@ -1142,7 +1142,7 @@ __Note__: while the API to access this service is uniform, conceptually there is
!operations
While BufferProvider is an interface meant to be backed by various different kinds of buffer and memory management approaches, there is a common set of operations to be supported by any of them
;announcing
:client code may announce beforehand that it expects to get a certain amount of buffers. Usually this causes some allocations to happen right away, or it might trigger similar mechanisms to ensure availability; the BufferProvider will then return the actual number of buffers guaranteed to be available. This announcing step is optional an can happen any time before or even after using the buffers and it can be repeated with different values to adjust to changing requirements. (Currently 9/2011 this is meant to be global for the whole BufferProvider, but it might happen that we need to break that down to individual clients)
:client code may announce beforehand that it expects to get a certain amount of buffers. Usually this causes some allocations to happen right away, or it might trigger similar mechanisms to ensure availability; the BufferProvider will then return the actual number of buffers guaranteed to be available. This announcing step is optional and can happen any time before or even after using the buffers and it can be repeated with different values to adjust to changing requirements. Thus the announced amount of buffers always denotes //additional buffers,// on top of what is actively used at the moment. This safety margin of available buffers usually is accounted separately for each distinct kind of buffer (buffer type). There is no tracking as to which specific client requested buffers, beyond the buffer type.
;locking
:this operation actually makes a buffer available for a specific client and returns a [[buffer handle|BuffHandle]]. The corresponding buffer is marked as used and can't be locked again unless released. If necessary, at that point the BufferProvider might allocate memory to accommodate (especially when the buffers weren't announced beforehand). The locking may fail and raise an exception. You may expect failure to be unlikely when buffers have been //announced beforehand.// To support additional sanity checks, the client may provide a token-ID with the lock-operation. This token may be retrieved later and it may be used to ensure the buffer is actually locked for //this token.//
;attaching
@ -2358,19 +2358,21 @@ Finally, this example shows an ''automation'' data set controlling some paramete
</pre>
</div>
<div title="ImplementationGuidelines" modifier="Ichthyostega" modified="201011190306" created="200711210531" tags="Concepts design discuss" changecount="17">
<div title="ImplementationGuidelines" modifier="Ichthyostega" modified="201111261555" created="200711210531" tags="Concepts design discuss" changecount="19">
<pre>!Observations, Ideas, Proposals
''this page is a scrapbook for collecting ideas'' &amp;mdash; please don't take anything noted here too literally. While writing code, I observe that I (ichthyo) follow certain informal guidelines, some of which I'd like to note down because they could evolve into general style guidelines for the Proc-Layer code.
* ''Inversion of Control'' is the leading design principle.
* but deliberately we stay just below the level of using Dependency Injection. Singletons and call-by-name are good enough. We're going to build //one// application, not any conceivable application.
* write error handling code only if the error situation can be actually //handled// at this place. Otherwise, be prepared for exceptions just passing by and thus handle any resources by &quot;resource acquisition is initialisation&quot; (RAII). Remember: error handling defeats decoupling and encapsulation
* write error handling code only if the error situation can be actually //handled// at this place. Otherwise, be prepared for exceptions just passing by and thus handle any resources by &quot;resource acquisition is initialisation&quot; (RAII). Remember: error handling defeats decoupling and encapsulation.
* (almost) never {{{delete}}} an object directly, use {{{new}}} only when some smart pointer is at hand.
* when user/client code is intended to create objects, make the ctor protected and provide a factory member called {{{create}}} instead, returning a smart pointer
* clearly distinguish ''value objects'' from objects with ''reference semantics'', i.e. objects having a distinct //object identity.//
* when user/client code is intended to create reference-semantics objects, make the ctor protected and provide a factory member called {{{create}}} instead, returning a smart pointer
* similarly, when we need just one instance of a given service, make the ctor protected and provide a factory member called {{{instance}}}, to be implemented by the lumiera::[[Singleton]] factory.
* whenever possible, prefer this (lazy initialised [[Singleton]]) approach and avoid static initialisation magic
* avoid doing anything non-local during the startup phase or shutdown phase of the application, especially avoid doing substantial work in any dtor.
* avoid assuming anything that can't be enforced by types, interfaces or signatures; this means: be prepared for open possibilities
* prefer {{{const}}} and initialisation code over assignment and active changes (inspired by functional programming)
* code is written for ''being read by humans''; code shall convey its meaning //even to the casual reader.//
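The guideline about reference-semantics objects above (protected ctor plus a {{{create}}} factory returning a smart pointer) can be sketched as follows. The class name and members are purely illustrative, not actual Lumiera code:

```cpp
#include <cassert>
#include <memory>

class Clip
  {
    int length_;

  protected:
    explicit Clip (int len) : length_(len) { }   // not publicly constructible

  public:
    typedef std::shared_ptr<Clip> PClip;

    // the sole way for client code to obtain a Clip instance
    static PClip create (int len)
      {
        return PClip (new Clip(len));
      }

    int length() const { return length_; }
  };
```

Client code then never sees a raw pointer and never calls {{{delete}}} directly, in line with the smart-pointer guideline above.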
</pre>
</div>
<div title="InlineJavaScript" modifier="Jeremy" created="200603090618" tags="systemConfig" server.type="file" server.host="file:///home/ct/.homepage/home.html" server.page.revision="200603090618">
@ -3337,7 +3339,7 @@ While actually data frames are //pulled,// on a conceptual level data is assumed
As both of these specifications are given by [[Pipe]]-~IDs, the actual designation information may be reduced. Much can be inferred from the circumstances, because any pipe includes a StreamType, and an output designation for an incompatible stream type is irrelevant. (e.g. an audio output when the pipe currently in question deals with video)
</pre>
</div>
<div title="OutputManagement" modifier="Ichthyostega" modified="201108251240" created="201007090155" tags="Model Rendering Player spec draft" changecount="37">
<div title="OutputManagement" modifier="Ichthyostega" modified="201111040000" created="201007090155" tags="Model Rendering Player spec draft" changecount="38">
<pre>//writing down some thoughts//
* ruled out the system outputs as OutputDesignation.
@ -3366,12 +3368,12 @@ Any external output sink is managed as a [[slot|DisplayerSlot]] in the ~OutputMa
&amp;rarr; see also the PlayService
!the global output manager
While within the model routing is done mostly just by referring to an OutputDesignation, at some point we need to map these abstract designations to real output capabilities. This OutputManager interface exposes these mappings and allows to control and manage them. Several elements within the application, most notably the [[viewers|ViewerAsset]], provide an implementation of this interface -- yet there is one primary implementation, the ''global output manager'', known as OutputDirector. It can be accessed through the {{{Output}}} façade interface and is the final authority when it comes to allocating an mapping of real output possibilities. The OutputDirector tracks all the OutputSlot elements currently installed and available for output.
Within the model routing is done mostly just by referring to an OutputDesignation -- but at some point finally we need to map these abstract designations to real output capabilities. This happens at the //output managing elements.// This interface, OutputManager, exposes these mappings of logical to real outputs and allows to manage and control them. Several elements within the application, most notably the [[viewers|ViewerAsset]], provide an implementation of this interface -- yet there is one primary implementation, the ''global output manager'', known as OutputDirector. It can be accessed through the {{{Output}}} façade interface and is the final authority when it comes to the allocation and mapping of real output possibilities. The OutputDirector tracks all the OutputSlot elements currently installed and available for output.
The relation between the central OutputDirector and the peripheral OutputManager implementations is hierarchical. Because output slots are usually registered rather at some peripheral output manager implementation, a direct mapping from OutputDesignation (i.e. global pipe) to these slots is created foremost at that peripheral level. Resolving a global pipe into an output slot is the core concern of any OutputManager implementation. Thus, when there is a locally preconfigured mapping, like e.g. for a viewer's video master pipe to the output slot installed by the corresponding GUI viewer element, then this mapping will picked up foremost to resolve the video master output.
The relation between the central OutputDirector and the peripheral OutputManager implementations is hierarchical. Because output slots are usually registered rather at some peripheral output manager implementation, a direct mapping from OutputDesignation (i.e. global pipe) to these slots is created foremost at that peripheral level. Resolving a global pipe into an output slot is the core concern of any OutputManager implementation. Thus, when there is a locally preconfigured mapping, like e.g. for a viewer's video master pipe to the output slot installed by the corresponding GUI viewer element, then this mapping will be picked up foremost to resolve the video master output.
For a viewer widget in the GUI this yields exactly the expected behaviour, but in other cases, e.g. for sound output, we need more general, more globally scoped output slots. In these cases, when a local mapping is absent, the query for output resolution is passed on up to the OutputDirector, drawing on the collection of globally available output slots for that specific kind of media.
{{red{Question: is it possible to retrieve a slot from another peripheral node?}}}
{{red{open question 11/11: is it possible to retrieve a slot from another peripheral node?}}}
</pre>
</div>
<div title="OutputManager" modifier="Ichthyostega" modified="201106212317" created="201106122359" tags="Player Model def" changecount="7">
@ -3423,33 +3425,34 @@ Thus the mapping is a copyable value object, based on a associative array. It ma
First and foremost, mapping can be seen as a //functional abstraction.// As it's used at implementation level, encapsulation of detail types isn't the primary concern, so it's a candidate for generic programming: For each of those use cases outlined above, a distinct mapping type is created by instantiating the {{{OutputMapping&lt;DEF&gt;}}} template with a specifically tailored definition context ({{{DEF}}}), which takes on the role of a strategy. Individual instances of this concrete mapping type may be default created and copied freely. This instantiation process includes picking up the concrete result type and building a functor object for resolving on the fly. Thus, in the way typical for generic programming, the more involved special details are moved out of sight, while being still in scope for the purpose of inlining. But there //is// a concern better to be encapsulated and concealed at the usage site, namely accessing the rules system. Thus mapping lends itself to the frequently used implementation pattern where there is a generic frontend as header, calling into opaque functions embedded within a separate compilation unit.
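The strategy-template idea described above might look roughly like the following sketch. The DEF policy supplies the target type and an on-the-fly resolution function, while the generic template contributes the copyable associative-array behaviour; all names here are hypothetical, not the actual Lumiera implementation:

```cpp
#include <cassert>
#include <map>
#include <string>

template<class DEF>
class OutputMapping
  : private DEF                              // strategy mixed in
  {
    std::map<std::string, std::string> table_;

  public:
    typedef typename DEF::Target Target;

    // look up a mapping; fall back to the strategy's default resolution
    Target operator[] (std::string const& pipeID)
      {
        auto pos = table_.find (pipeID);
        if (pos != table_.end())
          return DEF::resolve (pos->second);
        return DEF::resolve (pipeID);        // resolve on the fly
      }

    void define (std::string const& pipeID, std::string const& target)
      {
        table_[pipeID] = target;
      }
  };

// example strategy: "resolution" just decorates the ID
struct DemoDef
  {
    typedef std::string Target;
    std::string resolve (std::string const& id) { return "slot:" + id; }
  };
```

In the real code the resolution functor would call into the rules system, hidden behind an opaque function in a separate compilation unit, as the paragraph above suggests.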
</pre>
</div>
<div title="OutputSlot" modifier="Ichthyostega" modified="201109021531" created="201106162339" tags="def Concepts Player spec" changecount="27">
<div title="OutputSlot" modifier="Ichthyostega" modified="201111042355" created="201106162339" tags="def Concepts Player spec" changecount="40">
<pre>Within the Lumiera player and output subsystem, actually sending data to an external output requires allocating an ''output slot''
This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&amp;rarr; OutputManagement)
This is the central metaphor for the organisation of actual (system level) outputs; using this concept allows to separate and abstract the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&amp;rarr; OutputManagement)
!Properties of an output slot
Each OutputSlot is a unique and distinguishable entity. It corresponds explicitly to an external output, or a group of such outputs (e.g. left and right soundcard output channels), or an output file or similar capability accepting media content. First off, an output slot needs to be provided, configured and registered, using an implementation for the kind of media data to be output (sound, video) and the special circumstances of the output capability (render a file, display video in a GUI widget, send video to a full screen display, establish a Jack port, just use some kind of &quot;sound out&quot;). An output slot is always limited to a single kind of media, and to a single connection unit, but this connection may still be comprised of multiple channels (stereoscopic video, multichannel sound).
In order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client. At any time, there may be only a single client using a given output slot this way. To stress this point: output slots don't provide any kind of inherent mixing capability; any adaptation, mixing, overlaying and sharing needs to be done within the nodes network producing the output data fed to the slot. (yet some special kinds of external output capabilities -- e.g. the Jack audio connection system -- may still provide additional mixing capabilities, but that's beyond the scope of the Lumiera application)
In order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client. At any time, there may be only a single client using a given output slot this way. To stress this point: output slots don't provide any kind of inherent mixing capability; any adaptation, mixing, overlaying and sharing needs to be done within the nodes network producing the output data fed to the slot. (in special cases, some external output capabilities -- e.g. the Jack audio connection system -- may still provide additional mixing capabilities, but that's beyond the scope of the Lumiera application)
Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames is assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply to these timings when ''emitting'' data -- he's even required to provide a //current time specification,// alongside with the data. Yet the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.
Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames are assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply to these timings when ''emitting'' data -- he's even required to provide a //current time specification,// alongside with the data. Yet the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.
!!!data exchange models
Data is handed over by the client invoking an {{{emit(time,...)}}} function on the sink handle. Theoretically there are two different models how this data hand-over might be performed. This corresponds to the fact, that in some cases our own code manages the output and the buffers, while in other situations we intend to use existing library solutions or even external server applications to handle output
;buffer handover model
:the client owns the data buffer and cares for allocation and de-allocation. The {{{emit()}}}-call just propagates a pointer to the buffer holding the data ready for output. The output slot implementation in turn has the liability to copy or otherwise use this data within a given time limit.
;shared buffer model
:here the output mechanism owns the buffer. Within a certain time window prior to the expected time of the {{{emit()}}}-call, the client may obtain this buffer (pointer) to fill in the data. The slot implementation won't touch this buffer until the {{{emit()}}} handover, which in this case just provides the time and states that the client is done with that buffer. If the data emitting handshake doesn't happen at all, it counts as late and superseded by the next handshake.
:here the output mechanism owns the buffer. Within a certain time window prior to the expected time of the {{{emit()}}}-call, the client may obtain this buffer (pointer) to fill in the data. The slot implementation won't touch this buffer until the {{{emit()}}} handover, which in this case just provides the time and signals that the client is done with that buffer. If the data emitting handshake doesn't happen at all, it counts as late and superseded by the next handshake.
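The two handover models can be contrasted with a minimal sketch. The sink types and their operations are assumptions for illustration only, not the real sink handle API:

```cpp
#include <cassert>
#include <cstdint>

typedef int64_t Time;   // placeholder for Lumiera's time type

// buffer handover model: the client owns the buffer and passes a pointer;
// the slot must copy or otherwise consume the data within a time limit
struct ClientOwnedSink
  {
    void* lastData = nullptr;
    Time  lastTime = -1;

    void emit (Time t, void* clientBuffer)
      {
        lastTime = t;
        lastData = clientBuffer;
      }
  };

// shared buffer model: the slot owns the buffer; the client obtains it
// within the allowed window, fills it, and emit() just signals completion
struct SlotOwnedSink
  {
    char buffer[64];
    Time lastTime = -1;

    void* lockBuffer() { return buffer; }
    void  emit (Time t) { lastTime = t; }
  };
```

The &quot;unified&quot; approach discussed further below effectively generalises the second variant: the buffer always comes from a provider, and {{{emit()}}} carries only the time specification.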
!!!timing expectations
Besides the sink handles, allocation of an output slot defines some timing constraints, which are binding for the client. These timings are detailed and explicit, including a grid of deadlines for each frame to deliver, plus a fixed //latency.// Within this context, &amp;raquo;latency&amp;laquo; means the requirement to be ahead of the nominal time by a certain amount, to compensate for the processing time necessary to propagate the media to the physical output pin. The output slot implementation itself is bound by external constraints to deliver data at a fixed framerate and aligned to an externally defined timing grid, plus the data needs to be handed over ahead of these time points by a time amount given by the latency. Depending on the data exchange model, there is an additional time window limiting the buffer management.
The assumption is for the client to have elaborate timing capabilities at his disposal. More specifically, the client is a job running within the engine scheduler and thus can be configured to run //after// another job has finished, and to run within certain time limits. Thus the client is able to provide a //current nominal time// -- which is suitably close to the actual wall clock time. The output slot implementation can be written such as to work out from this time specification if the call is timely or overdue -- and react accordingly.
The assumption is for the client to have elaborate timing capabilities at his disposal. More specifically, the client is assumed to be a job running within the engine scheduler and thus can be configured to run //after// another job has finished, and to run within certain time limits. Thus the client is able to provide a //current nominal time// -- which is suitably close to the actual wall clock time. The output slot implementation can be written such as to work out from this time specification if the call is timely or overdue -- and react accordingly.
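The timing rule stated above (hand-over must lead the nominal frame time by at least the fixed latency) reduces to a simple check, sketched here with plain integer ticks instead of Lumiera's actual time types; all names are illustrative:

```cpp
#include <cassert>
#include <cstdint>

typedef int64_t Time;

struct Timings
  {
    Time frameDuration;   // grid spacing of the output deadlines
    Time latency;         // required head start before the nominal time

    // nominal output time of frame #n on the externally defined grid
    Time deadline (int64_t frameNr) const { return frameNr * frameDuration; }

    // a hand-over at 'now' for frame #n is timely iff it happens
    // at least 'latency' ahead of the frame's nominal time
    bool isTimely (int64_t frameNr, Time now) const
      {
        return now <= deadline(frameNr) - latency;
      }
  };
```

A slot implementation can thus classify each {{{emit(time,...)}}} call as timely or overdue from the nominal time the client provides, and engage its de-click or de-flicker handling in the latter case.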
{{red{TODO 6/11}}}in this spec, both data exchange models exhibit a weakness regarding the releasing of buffers. At which time is it safe to release a buffer, when the handover didn't happen? Do we need an explicit callback, and how could this callback be triggered? This is similar to the problem of closing a network connection, i.e. the problem is generally unsolvable, but can be handled pragmatically within certain limits.
{{red{WIP 11/11}}}meanwhile I've worked out the BufferProvider interface in detail. There's now a detailed buffer handover protocol defined, which is supported by a little state engine tracking BufferMetadata. This mechanism provides a transition to {{{BLOCKED}}} state when order or timing constraints are being violated, which practically solves this problem. How to detect and resolve such a floundered state from the engine point of view still remains to be addressed.
!!!Lifecycle and storage
The concrete OutputSlot implementation is owned and managed by the facility actually providing this output possibility. For example, the GUI provides viewer widgets, while some sound output backend provides sound ports. This implementation object is required to stay alive as long as it's registered with some OutputManager. It needs to be deregistered explicitly prior to destruction -- and this deregistration may block until all clients using this slot are terminated. Beyond that, an output slot implementation is expected to handle all kinds of failures gracefully -- preferrably just emitting a signal (callback functor).
The concrete OutputSlot implementation is owned and managed by the facility actually providing the output possibility in question. For example, the GUI provides viewer widgets, while some sound output backend provides sound ports. The associated OutputSlot implementation object is required to stay alive as long as it's registered with some OutputManager. It needs to be de-registered explicitly prior to destruction -- and this deregistration may block until all clients using this slot have terminated. Beyond that, an output slot implementation is expected to handle all kinds of failures gracefully -- preferably just emitting a signal (callback functor).
{{red{TODO 7/11: Deregistration is an unsolved problem....}}}
-----
@ -3466,9 +3469,9 @@ Solving this problem through //generic programming// -- i.e coding both cases ef
;unified
:extend and adapt the protocol such to make both models similar; concentrate all differences //within a separate buffer provider.//
!!!discussion
the generic approach looks as it's becoming rather convoluted in practice. We'd need to hand over additional parameters to the factory, which passes them through to the actual job implementation created. And there would be a coupling between slot and job (the slot is aware it's going to be used by a job, and even provides the implementation). Obviously, a benefit is that the actual code path executed within the job is without indirections, and all written down in a single location. Another benefit is the possibility to extend this approach to cover further buffer handling models -- it doesn't pose any requirements on the structure of the buffer handling.
If we accept to retrieve the buffer(s) via an indirection, which we kind of do anyway //within the render node implementation// -- the unified model looks more like a clean solution. It's more like doing away with some local optimisations possible if we handle the models explicitly, so it's not much of a loss, given that the majority of the processing time will be spent within the inner pixel calculation loops for frame processing anyway. When following this approach, the BufferProvider becomes a third, independent partner, and the slot cooperates tightly with this buffer provider, while the client (processing node) still just talks to the slot. Basically, this unified solution is like extending the shared buffer model to both cases.
&amp;rArr; conclusion: go for the unified approach!
the generic approach looks as if it's becoming rather convoluted in practice. We'd need to hand over additional parameters to the factory, which passes them through to the actual job implementation created. And there would be a coupling between slot and job (the slot is aware it's going to be used by a job, and even provides the implementation). Obviously, a benefit is that the actual code path executed within the job is without indirections, and all written down in a single location. Another benefit is the possibility to extend this approach to cover further buffer handling models -- it doesn't pose any requirements on the structure of the buffer handling --
On the other hand, if we accept to retrieve the buffer(s) via an indirection, which we kind of do anyway //within the render node implementation// -- the unified model looks more like a clean solution. It's more like doing away with some local optimisations possible if we handle the models explicitly, so it's not much of a loss, given that the majority of the processing time will be spent within the inner pixel calculation loops for frame processing anyway. When following this approach, the BufferProvider becomes a third, independent partner, and the slot cooperates tightly with this buffer provider, while the client (processing node) still just talks to the slot. Basically, this unified solution works like extending the shared buffer model to both cases.
&amp;rArr; __conclusion__: go for the unified approach!
!!!unified data exchange cycle
The planned delivery time of a frame is used as an ID throughout that cycle