unit test to cover thread-local helper

This commit is contained in:
Fischlurch 2011-12-23 05:00:27 +01:00
parent 2bdf06829a
commit 87f7a8f6e8
5 changed files with 122 additions and 4 deletions

View file

@ -188,7 +188,7 @@ namespace backend {
/** @note by design there is no possibility to find out
* just based on the thread handle, if the thread is alive.
* just based on the thread handle if some thread is alive.
* We define our own accounting here based on the internals
of the thread wrapper. This will break down if you mix
* uses of the C++ wrapper with the raw C functions. */

View file

@ -95,7 +95,7 @@ namespace lib {
private:
TAR*
accessChecked()
accessChecked() const
{
TAR *p(get());
if (!p)

View file

@ -648,6 +648,11 @@ return: 0
END
TEST "Wrapper thread-local pointers" ThreadLocal_test <<END
return: 0
END
TEST "Wait/Notify on Object Monitor" SyncWaiting_test <<END
return: 0
END

View file

@ -0,0 +1,112 @@
/*
ThreadLocal(Test) - verify wrapper for using thread-local data
Copyright (C) Lumiera.org
2011, Hermann Vosseler <Ichthyostega@web.de>
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2 of
the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
* *****************************************************/
#include "lib/test/run.hpp"
#include "backend/thread-wrapper.hpp"
#include "lib/thread-local.hpp"
#include <unistd.h>  // for usleep()
#include <cstdlib>
using test::Test;
using backend::ThreadJoinable;
using std::rand;
namespace lib {
namespace test{
namespace { // private test data...
const uint NUM_THREADS = 100;
const uint MAX_RAND = 5*1000*1000;
ThreadLocalPtr<uint> privateValue;
struct TestThread
: ThreadJoinable
{
TestThread()
: ThreadJoinable("test Thread-local storage"
,verifyThreadLocal)
{ }
/** the actual test operation running in a separate thread */
static void
verifyThreadLocal()
{
uint secret (1 + rand() % MAX_RAND);
privateValue.set (&secret);
usleep (secret); // sleep for a random period
if (secret != *privateValue)
throw error::Fatal ("thread-local value access broken");
}
};
} // (End) test data....
/**************************************************************************
* @test use a wrapper to simplify handling of thread-local data.
* Create some threads, each referring to another piece of data
* through the "same" wrapper instance.
*
* @see backend::Thread
* @see lib::ThreadLocal
*/
class ThreadLocal_test : public Test
{
virtual void
run (Arg)
{
TestThread testcase[NUM_THREADS] SIDEEFFECT;
for (uint i=0; i < NUM_THREADS; ++i)
CHECK (testcase[i].join().isValid() );
}
};
/** Register this test class... */
LAUNCHER (ThreadLocal_test, "function common");
}} // namespace lib::test

View file

@ -3425,7 +3425,7 @@ Thus the mapping is a copyable value object, based on an associative array. It ma
First and foremost, mapping can be seen as a //functional abstraction.// As it's used at implementation level, encapsulation of detail types isn't the primary concern, so it's a candidate for generic programming: for each of the use cases outlined above, a distinct mapping type is created by instantiating the {{{OutputMapping&lt;DEF&gt;}}} template with a specifically tailored definition context ({{{DEF}}}), which takes on the role of a strategy. Individual instances of this concrete mapping type may be default-created and copied freely. This instantiation process includes picking up the concrete result type and building a functor object for resolving entries on the fly. Thus, in the way typical for generic programming, the more involved special details are moved out of sight, while still remaining in scope for the purpose of inlining. But there //is// one concern better encapsulated and concealed at the usage site, namely accessing the rules system. Thus mapping lends itself to the frequently used implementation pattern of a generic frontend in a header, calling into opaque functions embedded within a separate compilation unit.
</pre>
</div>
<div title="OutputSlot" modifier="Ichthyostega" modified="201111042355" created="201106162339" tags="def Concepts Player spec" changecount="40">
<div title="OutputSlot" modifier="Ichthyostega" modified="201112230113" created="201106162339" tags="def Concepts Player spec img" changecount="52">
<pre>Within the Lumiera player and output subsystem, actually sending data to an external output requires allocating an ''output slot''.
This is the central metaphor for the organisation of actual (system-level) outputs; using this concept allows separating and abstracting the data calculation and the organisation of playback and rendering from the specifics of the actual output sink. Actual output possibilities (video in GUI window, video fullscreen, sound, Jack, rendering to file) can be added and removed dynamically from various components (backend, GUI), all using the same resolution and mapping mechanisms (&amp;rarr; OutputManagement)
@ -3434,7 +3434,8 @@ Each OutputSlot is an unique and distinguishable entity. It corresponds explicit
In order to be usable as //output sink,// an output slot needs to be //allocated,// i.e. tied to and locked for a specific client. At any time, there may be only a single client using a given output slot this way. To stress this point: output slots don't provide any kind of inherent mixing capability; any adaptation, mixing, overlaying and sharing needs to be done within the nodes network producing the output data fed to the slot. (in special cases, some external output capabilities -- e.g. the Jack audio connection system -- may still provide additional mixing capabilities, but that's beyond the scope of the Lumiera application)
Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames are assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply to these timings when ''emitting'' data -- he's even required to provide a //current time specification,// alongside with the data. Yet the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.
[&gt;img[Outputslot implementation structures|uml/fig151685.png]]
Once allocated, the output slot returns a set of concrete ''sink handles'' (one for each physical channel expecting data). The calculating process feeds its results into those handles. Size and other characteristics of the data frames are assumed to be suitable, which typically won't be verified at that level anymore (but the sink handle provides a hook for assertions). Besides that, the allocation of an output slot reveals detailed ''timing expectations''. The client is required to comply with these timings when ''emitting'' data -- it's even required to provide a //current time specification,// alongside the data. Based on this information, the output slot has the ability to handle timing failures gracefully; the concrete output slot implementation is expected to provide some kind of de-click or de-flicker facility, which kicks in automatically when a timing failure is detected.
!!!data exchange models
Data is handed over by the client invoking an {{{emit(time,...)}}} function on the sink handle. Theoretically, there are two different models for how this data hand-over might be performed. This corresponds to the fact that in some cases our own code manages the output and the buffers, while in other situations we intend to use existing library solutions or even external server applications to handle the output