From 7a250ca9e51f3192c16bb32f659d78b0775166d1 Mon Sep 17 00:00:00 2001
From: Ichthyostega
Date: Sat, 24 Mar 2018 11:02:22 +0100
Subject: [PATCH] DI: benchmark atomic locking

---
 research/try.cpp         |  2 +-
 wiki/renderengine.html   | 17 +++++++++---
 wiki/thinkPad.ichthyo.mm | 58 ++++++++++++++++++++++------------------
 3 files changed, 46 insertions(+), 31 deletions(-)

diff --git a/research/try.cpp b/research/try.cpp
index fa635c45a..bf49a0c96 100644
--- a/research/try.cpp
+++ b/research/try.cpp
@@ -106,7 +106,7 @@ main (int, char**)
 {
   0 == mystery().readMe();
 }
-  ,1000000000)
+  ,5000000000)
   << endl;
 
 LifecycleHook::trigger (ON_GLOBAL_SHUTDOWN);

diff --git a/wiki/renderengine.html b/wiki/renderengine.html
index 994677b32..6295a2cc6 100644
--- a/wiki/renderengine.html
+++ b/wiki/renderengine.html
@@ -1927,7 +1927,7 @@
 As we don't have a Prolog interpreter on board yet, we utilize a mock store. {{{default(Obj)}}} is a predicate expressing that the object {{{Obj}}} can be considered the default setup under the given conditions. Using the //default// can be considered a shortcut for actually finding an exact and unique solution. The latter would require specifying all sorts of detailed properties, up to the point where only one single object can satisfy all conditions. On the other hand, leaving some properties unspecified would yield a set of solutions (and the user code issuing the query would have to provide means for selecting one solution from this set). Just falling back on the //default// means that the user code actually doesn't care about any additional properties (as long as the properties it //does// care about are satisfied). Nothing is said specifically on //how//&nbsp; this default gets configured; actually there can be rules //somewhere,// and, additionally, anything encountered once while asking for a default can be re-used as default under similar circumstances. &rarr; [[implementing defaults|DefaultsImplementation]]
-
+
//Access point to dependencies by-name.//
 In the Lumiera code base, we refrain from building or using a full-blown Dependency Injection container. A lot of FUD has been spread regarding Dependency Injection and Singletons, to the point that a majority of developers confuse and conflate the ~Inversion-of-Control principle (which is essential) with the use of a ~DI-Container. Today, you cannot even mention the word "Singleton" without everyone yelling out "Evil! Evil!" -- while most of these people feel perfectly comfortable living in metadata hell.
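
Such a by-name access point can be sketched as a small class template. This is only an illustration of the idea, not the actual Lumiera interface; the names {{{Depend}}} and {{{Logger}}} are hypothetical. It relies on the C++11 guarantee that function-local statics are initialised exactly once, thread-safely:

```cpp
#include <cassert>

// Minimal sketch of a by-name dependency access point (names are
// illustrative, not the actual Lumiera API). Each service type gets
// one shared instance, reachable from anywhere via Depend<SRV>.
template<class SRV>
class Depend
{
  public:
    SRV&
    operator() ()
    {
        // C++11 "magic static": initialised lazily, exactly once,
        // with the compiler emitting the necessary synchronisation
        static SRV instance;
        return instance;
    }
};

// hypothetical example service
struct Logger
{
    int events = 0;
};
```

Any code needing the service just declares a {{{Depend<Logger>}}} instance locally; all such handles refer to the same shared object.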
 
@@ -1984,10 +1984,19 @@ The following table lists averaged results in relative numbers, in relation to a
 |direct invoke shared local object                  |   15.13|   16.30|  ''1.00''|   1.59|
 |invoke existing object through unique_ptr  |   60.76|   63.20|     1.20|   1.64|
 |lazy init unprotected (not threadsafe)           |   27.29|   26.57|     2.37|   3.58|
-|lazy init always mutex protected                    | 179,62| 10917.18| 86.40| 6661.23|
-|Double Checked Locking with mutex          |   27,37|   26,27|    2.04|    3.26|
+|lazy init always mutex protected                    | 179.62| 10917.18| 86.40| 6661.23|
+|Double Checked Locking with mutex          |   27.37|   26.27|    2.04|    3.26|
+|DCL with std::atomic and mutex for init       |   44.06|   52.27|    2.79|    4.04|
 
-These benchmarks used a dummy service class holding a volatile int, initialised to a random value. The complete code was visible to the compiler and thus eligible for inlining. After accessing this dummy object through the means listed in the table, the benchmarked code retrieved this value repeatedly and compared it to zero. The concurrent measurement used 8 threads (number of cores); as expected, the unprotected lazy init crashed several times randomly during those tests.
+These benchmarks used a dummy service class holding a volatile int, initialised to a random value. The complete code was visible to the compiler and thus eligible for inlining. After accessing this dummy object through the means listed in the table, the benchmarked code retrieved this value repeatedly and compared it to zero. The concurrent measurement used 8 threads (matching the number of cores). Some observations:
+* The numbers obtained pretty much confirm [[other people's measurements|http://www.modernescpp.com/index.php/thread-safe-initialization-of-a-singleton]]
+* Synchronisation is indeed necessary; the unprotected lazy init crashed several times randomly during those tests.
+* Contention on concurrent access is clearly measurable; even for unguarded access, the cache and memory subsystem has to perform additional work
+* However, the concurrency situation in this example is rather extreme and deliberately provokes collisions; in practice we'd be closer to the single threaded case
+* Double Checked Locking is a very effective implementation strategy and results in timings within the same order of magnitude as direct access
+* Unprotected lazy initialisation performs spurious duplicate initialisations, which can be avoided by DCL
+* Naive Mutex locking is slow even without contention
+* With optimisation, access times of ≈ 1 ns are achievable
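+
+The DCL variant measured above can be sketched as follows. This mirrors the technique benchmarked (atomic fast path, mutex guarding only the one-time initialisation), but the class and member names here are made up for illustration and do not appear in the patch:

```cpp
#include <atomic>
#include <mutex>

// hypothetical service type, standing in for the benchmark's dummy object
struct Service
{
    int payload;
    Service() : payload{42} { }
};

// Double-Checked Locking: lock-free load on the hot path,
// mutex taken only while the instance does not yet exist.
class LazyInit
{
    static std::atomic<Service*> instance;
    static std::mutex initMutex;

  public:
    static Service&
    get()
    {
        Service* obj = instance.load (std::memory_order_acquire);
        if (!obj)                                   // first check: fast path, no lock
        {
            std::lock_guard<std::mutex> guard{initMutex};
            obj = instance.load (std::memory_order_relaxed);
            if (!obj)                               // second check: under the lock
            {
                obj = new Service;                  // deliberately never deleted (singleton)
                instance.store (obj, std::memory_order_release);
            }
        }
        return *obj;
    }
};

std::atomic<Service*> LazyInit::instance{nullptr};
std::mutex LazyInit::initMutex;
```

The acquire/release pair ensures that a thread seeing a non-null pointer also sees the fully constructed {{{Service}}}; this is what makes DCL safe here, where the naive unsynchronised version crashed in the tests above.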
 
diff --git a/wiki/thinkPad.ichthyo.mm b/wiki/thinkPad.ichthyo.mm
index 81ef10087..e28dbbeaa 100644
--- a/wiki/thinkPad.ichthyo.mm
+++ b/wiki/thinkPad.ichthyo.mm
@@ -26914,8 +26914,8 @@
@@ -26926,7 +26926,7 @@
@@ -27347,8 +27347,8 @@
@@ -27404,10 +27404,10 @@
@@ -27437,7 +27437,9 @@
@@ -27660,8 +27662,8 @@
@@ -27675,22 +27677,23 @@
@@ -27699,12 +27702,15 @@