/*
  TEST-CHAIN-LOAD.hpp  -  produce a configurable synthetic computation load

  Copyright (C)         Lumiera.org
    2023,               Hermann Vosseler <Ichthyostega@web.de>

  This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License as
  published by the Free Software Foundation; either version 2 of
  the License, or (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

*/

/** @file test-chain-load.hpp
 ** Generate synthetic computation load for Scheduler performance tests.
 ** The [Scheduler](\ref scheduler.hpp) is a service to invoke Render Job instances
 ** concurrently in accordance with a time plan. To investigate the runtime and performance
 ** characteristics of the implementation, a well-defined artificial computation load is
 ** necessary, comprised of the invocation of an extended number of Jobs, each configured
 ** to carry out a reproducible computation. Data dependencies between jobs can be established
 ** to verify handling of dependent jobs and job completion messages within the scheduler.
 **
 ** # Random computation structure
 ** A system of connected hash values is used as computation load, akin to a blockchain.
 ** Each processing step is embodied into a node, with a hash value computed by combining
 ** all predecessor nodes. Connectivity is represented as bidirectional pointer links; each
 ** node knows its predecessors and successors (if any), while the maximum _fan out_ or
 ** _fan in_ and the total number of nodes is limited statically. All nodes are placed
 ** into a single pre-allocated memory block and always processed in ascending dependency
 ** order. The result hash from complete processing can thus be computed by a single linear
 ** pass over all nodes; yet alternatively each node can be _scheduled_ as an individual
 ** computation job, which obviously requires that its predecessor nodes have already
 ** been computed, otherwise the resulting hash will not match up with expectation.
 ** If significant per-node computation time is required, an artificial computational
 ** load can be generated, controlled by a weight setting computed for each node.
 **
 ** The topology of connectivity is generated randomly, yet completely deterministically,
 ** through configurable _control functions_ driven by each node's (hash)value. This way,
 ** each node can optionally fork out to several successor nodes, but can also reduce and
 ** combine its predecessor nodes; additionally, new chains can be spawned (to simulate the
 ** effect of data loading Jobs without predecessor) and chains can be deliberately pruned,
 ** possibly splitting the computation into several disjoint sub-graphs. In any case, the
 ** computation always begins with the _root node,_ proceeds over the node links and finally
 ** connects any open chains of computation to the _top node,_ leaving no dead end. The
 ** probabilistic rules controlling the topology can be configured using the lib::RandomDraw
 ** component, allowing either just to set a fixed probability or to define elaborate
 ** dynamic configurations based on the graph height or node connectivity properties.
 ** - expansionRule: controls forking of the graph behind the current node
 ** - reductionRule: controls joining of the graph into a combining successor node
 ** - seedingRule: controls injection of new start nodes in the middle of the graph
 ** - pruningRule: controls insertion of exit nodes to cut off some chain immediately
 ** - weightRule: controls assignment of a Node::weight to command the ComputationalLoad
 **
 ** ## Usage
 ** A TestChainLoad instance is created with a predetermined maximum fan factor and a fixed
 ** number of nodes, which are immediately allocated and initialised. Using _builder notation,_
 ** control functions can then be configured. The [topology generation](\ref TestChainLoad::buildTopology)
 ** then traverses the nodes, starting with the seed value from the root node, and establishes
 ** the complete node connectivity. After this priming, the expected result hash can be
 ** [retrieved](\ref TestChainLoad::getHash). The node structure can then be traversed or
 ** [scheduled as Render Jobs](\ref TestChainLoad::scheduleJobs).
 **
 ** ## Observation tools
 ** The generated topology can be visualised as a graph, using the Graphviz-DOT language.
 ** Nodes are rendered from bottom to top, organised into strata according to the time-level
 ** and showing predecessor -> successor connectivity. Seed nodes are distinguished by
 ** circular shape.
 **
 ** The complete graph can be [performed synchronously](\ref TestChainLoad::performGraphSynchronously),
 ** allowing to watch a [baseline run-time](\ref TestChainLoad::calcRuntimeReference) when executing
 ** all nodes consecutively, using the configured load but without any time gaps. The run time
 ** in µs can be compared to the timings observed when performing the graph through the Scheduler.
 ** Moreover, statistics can be computed over the generated graph, allowing to draw some
 ** conclusions regarding node distribution and connectivity.
 **
 ** @see TestChainLoad_test
 ** @see SchedulerStress_test
 ** @see random-draw.hpp
 */

#ifndef VAULT_GEAR_TEST_TEST_CHAIN_LOAD_H
#define VAULT_GEAR_TEST_TEST_CHAIN_LOAD_H


#include "vault/common.hpp"
#include "lib/test/transiently.hpp"

#include "vault/gear/job.h"
#include "vault/gear/scheduler.hpp"
#include "vault/gear/special-job-fun.hpp"
#include "lib/uninitialised-storage.hpp"
#include "lib/test/microbenchmark.hpp"
#include "lib/time/timevalue.hpp"
#include "lib/time/quantiser.hpp"
#include "lib/iter-explorer.hpp"
#include "lib/format-string.hpp"
#include "lib/format-cout.hpp"
#include "lib/random-draw.hpp"
#include "lib/dot-gen.hpp"
#include "lib/util.hpp"

#include <boost/functional/hash.hpp>
#include <functional>
#include <utility>
#include <future>
#include <memory>
#include <string>
#include <vector>
#include <tuple>
#include <array>

namespace vault{
namespace gear {
namespace test {

  using util::_Fmt;
  using util::min;
  using util::max;
  using util::isnil;
  using util::limited;
  using util::unConst;
  using util::toString;
  using util::showHashLSB;
  using lib::time::Time;
  using lib::time::TimeValue;
  using lib::time::FrameRate;
  using lib::time::Duration;
  using lib::test::benchmarkTime;
  using lib::test::microBenchmark;
  using lib::test::Transiently;
  using lib::meta::_FunRet;

  using std::string;
  using std::function;
  using std::make_pair;
  using std::forward;
  using std::swap;
  using std::move;
  using std::chrono_literals::operator ""s;

  namespace err = lumiera::error;
  namespace dot = lib::dot_gen;

  namespace { // Default definitions for structured load testing

    const size_t DEFAULT_FAN = 16;                     ///< default maximum connectivity per Node
    const size_t DEFAULT_SIZ = 256;                    ///< default node count for the complete load graph

    const auto SAFETY_TIMEOUT = 5s;                    ///< maximum time limit for test run, abort if exceeded
    const auto STANDARD_DEADLINE = 10ms;               ///< deadline to use for each individual computation job
    const size_t DEFAULT_CHUNKSIZE = 64;               ///< number of computation jobs to prepare in each planning round
    const size_t GRAPH_BENCHMARK_RUNS = 5;             ///< repetition count for reference calculation of a complete node graph
    const size_t LOAD_BENCHMARK_RUNS = 500;            ///< repetition count for calibration benchmark for ComputationalLoad
    const double LOAD_SPEED_BASELINE = 100;            ///< initial assumption for calculation speed (without calibration)
    const microseconds LOAD_DEFAULT_TIME = 100us;      ///< default time delay produced by ComputationalLoad at `Node.weight==1`
    const size_t LOAD_DEFAULT_MEM_SIZE = 1000;         ///< default allocation base size used if ComputationalLoad.useAllocation
    const Duration SCHEDULE_LEVEL_STEP{_uTicks(1ms)};  ///< time budget to plan for the calculation of each »time level« of jobs
    const Duration SCHEDULE_PLAN_STEP{_uTicks(100us)}; ///< time budget to reserve for each node to be planned and scheduled
  }

  struct LevelWeight
    {
      size_t level{0};
      size_t nodes{0};
      size_t endidx{0};
      size_t weight{0};
    };

  /**
   * simplified model for expense of a node, computed concurrently.
   * @remark assumptions of this model
   *       - weight factor describes expense to compute this node
   *       - nodes on the same level can be parallelised without limitation
   *       - no consideration of stacking / ordering of tasks; rather the speed-up
   *         is applied as an average factor to the summed node weights for a level
   * @return guess for a compounded weight factor
   */
  inline double
  computeWeightFactor (LevelWeight const& lw, uint concurrency)
    {
      REQUIRE (0 < concurrency);
      double speedUp = lw.nodes? lw.nodes / std::ceil (double(lw.nodes)/concurrency)
                               : 1.0;
      ENSURE (1.0 <= speedUp);
      return lw.weight / speedUp;
    }

  struct Statistic;


  /***********************************************************************//**
   * A Generator for synthetic Render Jobs for Scheduler load testing.
   * Allocates a fixed set of #numNodes and generates connecting topology.
   * @tparam maxFan maximal fan-in/out from a node, also limits maximal parallel strands.
   * @see TestChainLoad_test
   */
  template<size_t maxFan =DEFAULT_FAN>
  class TestChainLoad
    : util::MoveOnly
    {
    public:
      /** Graph Data structure */
      struct Node
        : util::MoveOnly
        {
          using _Arr  = std::array<Node*, maxFan>;
          using Iter  = typename _Arr::iterator;
          using CIter = typename _Arr::const_iterator;

          /** Table with connections to other Node records */
          struct Tab : _Arr
            {
              Iter after = _Arr::begin();

              Iter  end()      { return after; }
              CIter end() const{ return after; }
              friend Iter  end (Tab      & tab){ return tab.end(); }
              friend CIter end (Tab const& tab){ return tab.end(); }

              Node* front()    { return empty()? nullptr : _Arr::front(); }
              Node* back()     { return empty()? nullptr : *(after-1);    }

              void clear()     { after = _Arr::begin(); }  ///< @warning pointer data in array not cleared

              size_t size()  const { return unConst(this)->end()-_Arr::begin(); }
              bool   empty() const { return 0 == size(); }

              Iter
              add(Node* n)
                {
                  if (after != _Arr::end())
                    {
                      *after = n;
                      return after++;
                    }
                  NOTREACHED ("excess node linkage");
                }
            };

          size_t hash;
          size_t level{0}, weight{0};
          Tab pred{0}, succ{0};

          Node(size_t seed =0)
            : hash{seed}
            { }

          void
          clear()
            {
              hash = 0;
              level = weight = 0;
              pred.clear();
              succ.clear();
            }

          Node&
          addPred (Node* other)
            {
              REQUIRE (other);
              pred.add (other);
              other->succ.add (this);
              return *this;
            }

          Node&
          addSucc (Node* other)
            {
              REQUIRE (other);
              succ.add (other);
              other->pred.add (this);
              return *this;
            }
          Node& addPred(Node& other) { return addPred(&other); }
          Node& addSucc(Node& other) { return addSucc(&other); }

          size_t
          calculate()
            {
              for (Node* entry: pred)
                boost::hash_combine (hash, entry->hash);
              return hash;
            }

          friend bool isStart (Node const& n) { return isnil (n.pred); };
          friend bool isExit  (Node const& n) { return isnil (n.succ); };
          friend bool isInner (Node const& n) { return not (isStart(n) or isExit(n)); }
          friend bool isFork  (Node const& n) { return 1 < n.succ.size(); }
          friend bool isJoin  (Node const& n) { return 1 < n.pred.size(); }
          friend bool isLink  (Node const& n) { return 1 == n.pred.size() and 1 == n.succ.size(); }
          friend bool isKnot  (Node const& n) { return isFork(n) and isJoin(n); }

          friend bool isStart (Node const* n) { return n and isStart(*n); };
          friend bool isExit  (Node const* n) { return n and isExit (*n); };
          friend bool isInner (Node const* n) { return n and isInner(*n); };
          friend bool isFork  (Node const* n) { return n and isFork (*n); };
          friend bool isJoin  (Node const* n) { return n and isJoin (*n); };
          friend bool isLink  (Node const* n) { return n and isLink (*n); };
          friend bool isKnot  (Node const* n) { return n and isKnot (*n); };
        };

      /** link Node.hash to random parameter generation */
      class NodeControlBinding;

      /** Parameter values limited [0 .. maxFan] */
      using Param = lib::Limited<size_t, maxFan>;

      /** Topology is governed by rules for random params */
      using Rule = lib::RandomDraw<NodeControlBinding>;

    private:
      using NodeTab = typename Node::Tab;
      using NodeIT  = lib::RangeIter<Node*>;

      std::unique_ptr<Node[]> nodes_;
      size_t numNodes_;

      Rule seedingRule_  {};
      Rule expansionRule_{};
      Rule reductionRule_{};
      Rule pruningRule_  {};
      Rule weightRule_   {};

      Node* frontNode() { return &nodes_[0]; }
      Node* afterNode() { return &nodes_[numNodes_]; }
      Node* backNode()  { return &nodes_[numNodes_-1];}

    public:
      explicit
      TestChainLoad(size_t nodeCnt =DEFAULT_SIZ)
        : nodes_{new Node[nodeCnt]}
        , numNodes_{nodeCnt}
        {
          REQUIRE (1 < nodeCnt);
        }

      size_t size()     const { return numNodes_; }
      size_t topLevel() const { return unConst(this)->backNode()->level; }
      size_t getSeed()  const { return unConst(this)->frontNode()->hash; }


      auto
      allNodes()
        {
          return lib::explore (NodeIT{frontNode(),afterNode()});
        }
      auto
      allNodePtr()
        {
          return allNodes().asPtr();
        }

      auto
      allExitNodes()
        {
          return allNodes().filter([](Node& n){ return isExit(n); });
        }
      auto
      allExitHashes() const
        {
          return unConst(this)->allExitNodes().transform([](Node& n){ return n.hash; });
        }

      /** global hash is the combination of all exit node hashes != 0 */
      size_t
      getHash() const
        {
          auto combineBoostHashes = [](size_t h, size_t hx){ boost::hash_combine(h,hx); return h;};
          return allExitHashes()
                    .filter([](size_t h){ return h != 0; })
                    .reduce(lib::iter_explorer::IDENTITY
                           ,combineBoostHashes
                           );
        }


      /** @return the node's index number, based on its storage location */
      size_t nodeID(Node const* n){ return n - frontNode(); };
      size_t nodeID(Node const& n){ return nodeID (&n); };


      /* ===== topology control ===== */

      TestChainLoad&&
      seedingRule (Rule r)
        {
          seedingRule_ = move(r);
          return move(*this);
        }

      TestChainLoad&&
      expansionRule (Rule r)
        {
          expansionRule_ = move(r);
          return move(*this);
        }

      TestChainLoad&&
      reductionRule (Rule r)
        {
          reductionRule_ = move(r);
          return move(*this);
        }

      TestChainLoad&&
      pruningRule (Rule r)
        {
          pruningRule_ = move(r);
          return move(*this);
        }

      TestChainLoad&&
      weightRule (Rule r)
        {
          weightRule_ = move(r);
          return move(*this);
        }


      /** Abbreviation for starting rules */
      static Rule rule()          { return Rule(); }
      static Rule value(size_t v) { return Rule().fixedVal(v); }

      static Rule
      rule_atStart (uint v)
        {
          return Rule().mapping([v](Node* n)
                                  {
                                    return isStart(n)? Rule().fixedVal(v)
                                                     : Rule();
                                  });
        }

      static Rule
      rule_atJoin (uint v)
        {
          return Rule().mapping([v](Node* n)
                                  {
                                    return isJoin(n) ? Rule().fixedVal(v)
                                                     : Rule();
                                  });
        }

      static Rule
      rule_atLink (uint v)
        {
          return Rule().mapping([v](Node* n)
                                  {  // NOTE: when applying these rules,
                                     //       successors are not yet wired...
                                    return not (isJoin(n) or isStart(n))
                                              ? Rule().fixedVal(v)
                                              : Rule();
                                  });
        }

      static Rule
      rule_atJoin_else (double p1, double p2, uint v=1)
        {
          return Rule().mapping([p1,p2,v](Node* n)
                                  {
                                    return isJoin(n) ? Rule().probability(p1).maxVal(v)
                                                     : Rule().probability(p2).maxVal(v);
                                  });
        }

      /** preconfigured topology: isolated simple 2-step chains */
      TestChainLoad&&
      configureShape_short_chains2()
        {
          pruningRule(rule().probability(0.8));
          weightRule(value(1));
          return move(*this);
        }

      /** preconfigured topology: simple 3-step chains, starting interleaved */
      TestChainLoad&&
      configureShape_short_chains3_interleaved()
        {
          pruningRule(rule().probability(0.6));
          seedingRule(rule_atStart(1));
          weightRule(value(1));
          return move(*this);
        }

      /** preconfigured topology: simple interwoven 3-step graph segments */
      TestChainLoad&&
      configureShape_short_segments3_interleaved()
        {
          seedingRule(rule().probability(0.8).maxVal(1));
          reductionRule(rule().probability(0.75).maxVal(3));
          pruningRule(rule_atJoin(1));
          weightRule(value(1));
          return move(*this);
        }

      /** preconfigured topology: single graph with massive »load bursts«
       * @see TestChainLoad_test::showcase_StablePattern() */
      TestChainLoad&&
      configureShape_chain_loadBursts()
        {
          expansionRule(rule().probability(0.27).maxVal(4));
          reductionRule(rule().probability(0.44).maxVal(6).minVal(2));
          weightRule (rule().probability(0.66).maxVal(3));
          setSeed(55);       // ◁─────── produces a prelude with parallel chains,
          return move(*this);//          then fork at level 17 followed by bursts of load.
        }

/**
|
|
|
|
|
|
* Use current configuration and seed to (re)build Node connectivity.
|
2023-11-30 02:13:39 +01:00
|
|
|
|
* While working in-place, the wiring and thus the resulting hash values
|
|
|
|
|
|
* are completely rewritten, progressing from start and controlled by
|
|
|
|
|
|
* evaluating the _drawing rules_ on the current node, computing its hash.
|
2023-11-12 19:36:27 +01:00
|
|
|
|
*/
|
2023-11-16 18:42:36 +01:00
|
|
|
|
TestChainLoad&&
|
2023-11-12 19:36:27 +01:00
|
|
|
|
buildToplolgy()
|
|
|
|
|
|
{
|
2023-11-12 23:31:08 +01:00
|
|
|
|
NodeTab a,b, // working data for generation
|
|
|
|
|
|
*curr{&a}, // the current set of nodes to carry on
|
|
|
|
|
|
*next{&b}; // the next set of nodes connected to current
|
2023-12-04 03:15:46 +01:00
|
|
|
|
Node* node = frontNode();
|
2023-11-12 19:36:27 +01:00
|
|
|
|
size_t level{0};
|
2023-11-26 03:04:59 +01:00
|
|
|
|
|
2023-11-30 02:13:39 +01:00
|
|
|
|
// transient snapshot of rules (non-copyable, once engaged)
|
|
|
|
|
|
Transiently originalExpansionRule{expansionRule_};
|
|
|
|
|
|
Transiently originalReductionRule{reductionRule_};
|
|
|
|
|
|
Transiently originalseedingRule {seedingRule_};
|
|
|
|
|
|
Transiently originalPruningRule {pruningRule_};
|
2023-12-09 03:13:48 +01:00
|
|
|
|
Transiently originalWeightRule {weightRule_};
|
2023-11-12 23:31:08 +01:00
|
|
|
|
|
|
|
|
|
|
// prepare building blocks for the topology generation...
|
2023-11-16 23:50:42 +01:00
|
|
|
|
auto moreNext = [&]{ return next->size() < maxFan; };
|
2023-12-04 03:15:46 +01:00
|
|
|
|
auto moreNodes = [&]{ return node < backNode(); };
|
2023-11-16 23:50:42 +01:00
|
|
|
|
auto spaceLeft = [&]{ return moreNext() and moreNodes(); };
|
2023-11-26 22:28:12 +01:00
|
|
|
|
auto addNode = [&](size_t seed =0)
|
|
|
|
|
|
{
|
2023-11-12 23:31:08 +01:00
|
|
|
|
Node* n = *next->add (node++);
|
|
|
|
|
|
n->clear();
|
|
|
|
|
|
n->level = level;
|
2023-11-26 22:28:12 +01:00
|
|
|
|
n->hash = seed;
|
2023-11-12 23:31:08 +01:00
|
|
|
|
return n;
|
|
|
|
|
|
};
|
2023-11-26 03:04:59 +01:00
|
|
|
|
auto apply = [&](Rule& rule, Node* n)
|
2023-11-12 23:31:08 +01:00
|
|
|
|
{
|
2023-11-26 03:04:59 +01:00
|
|
|
|
return rule(n);
|
2023-11-12 23:31:08 +01:00
|
|
|
|
};
|
2023-12-11 22:55:11 +01:00
|
|
|
|
auto calcNode = [&](Node* n)
|
|
|
|
|
|
{
|
|
|
|
|
|
n->calculate();
|
|
|
|
|
|
n->weight = apply(weightRule_,n);
|
|
|
|
|
|
};
|
2023-11-10 23:54:47 +01:00
|
|
|
|
|
2023-11-12 23:31:08 +01:00
|
|
|
|
// visit all further nodes and establish links
|
          while (moreNodes())
            {
              curr->clear();
              swap (next, curr);
              size_t toReduce{0};
              Node* r = nullptr;
              REQUIRE (spaceLeft());
              for (Node* o : *curr)
                { // follow-up on all Nodes in current level...
                  calcNode(o);
                  if (apply (pruningRule_,o))
                    continue; // discontinue
                  size_t toSeed   = apply (seedingRule_,  o);
                  size_t toExpand = apply (expansionRule_,o);
                  while (0 < toSeed and spaceLeft())
                    { // start a new chain from seed
                      addNode(this->getSeed());
                      --toSeed;
                    }
                  while (0 < toExpand and spaceLeft())
                    { // fork out secondary chain from o
                      Node* n = addNode();
                      o->addSucc(n);
                      --toExpand;
                    }
                  if (not toReduce)
                    { // carry-on chain from o
                      r = spaceLeft()? addNode():nullptr;
                      toReduce = apply (reductionRule_, o);
                    }
                  else
                    --toReduce;
                  if (r) // connect chain from o...
                    r->addPred(o);
                  else // space for successors is already exhausted
                    { // can not carry-on, but must ensure no chain is broken
                      ENSURE (not next->empty());
                      if (o->succ.empty())
                        o->addSucc (next->back());
                    }
                }
              ENSURE (not isnil(next) or spaceLeft());
              if (isnil(next)) // ensure graph continues
                addNode(this->getSeed());
              ENSURE (not next->empty());
              ++level;
            }
          ENSURE (node == backNode());
          // connect ends of all remaining chains to top-Node
          node->clear();
          node->level = level;
          for (Node* o : *next)
            {
              calcNode(o);
              node->addPred(o);
            }
          calcNode(node);
          //
          return move(*this);
        }


      /**
       * Set the overall seed value.
       * @note does not propagate seed to consecutive start nodes
       */
      TestChainLoad&&
      setSeed (size_t seed = rand())
        {
          frontNode()->hash = seed;
          return move(*this);
        }


      /**
       * Recalculate all node hashes and propagate seed value.
       */
      TestChainLoad&&
      recalculate()
        {
          size_t seed = this->getSeed();
          for (Node& n : allNodes())
            {
              n.hash = isStart(n)? seed : 0;
              n.calculate();
            }
          return move(*this);
        }


      /**
       * Clear node hashes and propagate seed value.
       */
      TestChainLoad&&
      clearNodeHashes()
        {
          size_t seed = this->getSeed();
          for (Node& n : allNodes())
            n.hash = isStart(n)? seed : 0;
          return move(*this);
        }



      /* ===== Operators ===== */

      std::string
      generateTopologyDOT()
        {
          using namespace dot;

          Section nodes("Nodes");
          Section layers("Layers");
          Section topology("Topology");

          // Styles to distinguish the computation nodes
          Code BOTTOM{"shape=doublecircle"};
          Code SEED  {"shape=circle"};
          Code TOP   {"shape=box, style=rounded"};
          Code DEFAULT{};

          // prepare time-level zero
          size_t level(0);
          auto timeLevel = scope(level).rank("min ");

          for (Node& n : allNodes())
            {
              size_t i = nodeID(n);
              string tag{toString(i)+": "+showHashLSB(n.hash)};
              if (n.weight) tag +="."+toString(n.weight);
              nodes += node(i).label(tag)
                              .style(i==0         ? BOTTOM
                                    :isnil(n.pred)? SEED
                                    :isnil(n.succ)? TOP
                                    :               DEFAULT);
              for (Node* suc : n.succ)
                topology += connect (i, nodeID(*suc));

              if (level != n.level)
                {// switch to next time-level
                  layers += timeLevel;
                  ++level;
                  ENSURE (level == n.level);
                  timeLevel = scope(level).rank("same");
                }
              timeLevel.add (node(i));
            }
          layers += timeLevel; // close last layer

          // combine and render collected definitions as DOT-code
          return digraph (nodes, layers, topology);
        }

      TestChainLoad&&
      printTopologyDOT()
        {
          cout << "───═══───═══───═══───═══───═══───═══───═══───═══───═══───═══───\n"
               << generateTopologyDOT()
               << "───═══───═══───═══───═══───═══───═══───═══───═══───═══───═══───"
               << endl;
          return move(*this);
        }


      /**
       * Conduct a number of benchmark runs over processing the Graph synchronously.
       * @return averaged runtime in microseconds
       * @remark can be used as reference point to judge Scheduler performance;
       *         - additional parallelisation could be exploited: ∅w / floor(∅w/concurrency)
       *         - but the Scheduler also adds overhead and dispatch leeway
       */
      double
      calcRuntimeReference (microseconds timeBase =LOAD_DEFAULT_TIME
                           ,size_t sizeBase =0
                           ,size_t repeatCnt=GRAPH_BENCHMARK_RUNS
                           )
        {
          return microBenchmark ([&]{ performGraphSynchronously(timeBase,sizeBase); }
                                ,repeatCnt)
                    .first;  // ∅ runtime in µs
        }

      /** Emulate complete graph processing in a single-threaded loop. */
      TestChainLoad&& performGraphSynchronously (microseconds timeBase =LOAD_DEFAULT_TIME
                                                ,size_t sizeBase =0);

      TestChainLoad&&
      printRuntimeReference (microseconds timeBase =LOAD_DEFAULT_TIME
                            ,size_t sizeBase =0
                            ,size_t repeatCnt=GRAPH_BENCHMARK_RUNS
                            )
        {
          cout << _Fmt{"runtime ∅(%d) = %6.2fms (single-threaded)\n"}
                      % repeatCnt
                      % (1e-3 * calcRuntimeReference(timeBase,sizeBase,repeatCnt))
               << "───═══───═══───═══───═══───═══───═══───═══───═══───═══───═══───"
               << endl;
          return move(*this);
        }


      /** calculate node weights aggregated per level */
      auto
      allLevelWeights()
        {
          return allNodes()
                    .groupedBy([](Node& n){ return n.level; }
                              ,[this](LevelWeight& lw, Node const& n)
                                  {
                                    lw.level   = n.level;
                                    lw.weight += n.weight;
                                    lw.endidx  = nodeID(n);
                                    ++lw.nodes;
                                  }
                              );
        }

      /** sequence of the summed, compounded weight factors _after_ each level */
      auto
      levelScheduleSequence (uint concurrency =1)
        {
          return allLevelWeights()
                    .transform([schedule=0.0, concurrency]
                               (LevelWeight const& lw) mutable
                                  {
                                    schedule += computeWeightFactor (lw, concurrency);
                                    return schedule;
                                  });
        }


      Statistic computeGraphStatistics();
      TestChainLoad&& printTopologyStatistics();

      class ScheduleCtx;
      friend class ScheduleCtx; // accesses raw storage array

      ScheduleCtx setupSchedule (Scheduler& scheduler);
    };




  /**
   * Policy/Binding for generation of [random parameters](\ref TestChainLoad::Param)
   * by [»drawing«](\ref random-draw.hpp) based on the [node-hash](\ref TestChainLoad::Node).
   * Notably this policy template maps the ways to spell out [»Ctrl rules«](\ref TestChainLoad::Rule)
   * to configure the probability profile of the topology parameters _seeding, expansion, reduction
   * and pruning._ The RandomDraw component used to implement those rules provides a builder-DSL
   * and accepts λ-bindings in various forms to influence the mapping of Node hash into result parameters.
   */
  template<size_t maxFan>
  class TestChainLoad<maxFan>::NodeControlBinding
    : public std::function<Param(Node*)>
    {
    protected:
      /** by default use Node-hash directly as source of randomness */
      static size_t
      defaultSrc (Node* node)
        {
          return node? node->hash:0;
        }

      static size_t
      level (Node* node)
        {
          return node? node->level:0;
        }

      static double
      guessHeight (size_t level)
        { // heuristic guess for a »fully stable state«
          double expectedHeight = 2*maxFan;
          return level / expectedHeight;
        }



      /** Adaptor to handle further mapping functions */
      template<class SIG>
      struct Adaptor
        {
          static_assert (not sizeof(SIG), "Unable to adapt given functor.");
        };

      /** allow simple rules directly manipulating the hash value */
      template<typename RES>
      struct Adaptor<RES(size_t)>
        {
          template<typename FUN>
          static auto
          build (FUN&& fun)
            {
              return [functor=std::forward<FUN>(fun)]
                     (Node* node) -> _FunRet<FUN>
                        {
                          return functor (defaultSrc (node));
                        };
            }
        };

      /** allow rules additionally involving the height of the graph,
       *  which also represents time. 1.0 refers to the _stable state generation,_
       *  guessed as height ≡ 2·maxFan levels. */
      template<typename RES>
      struct Adaptor<RES(size_t,double)>
        {
          template<typename FUN>
          static auto
          build (FUN&& fun)
            {
              return [functor=std::forward<FUN>(fun)]
                     (Node* node) -> _FunRet<FUN>
                        {
                          return functor (defaultSrc (node)
                                         ,guessHeight(level(node)));
                        };
            }
        };

      /** rules may also build solely on the (guessed) height. */
      template<typename RES>
      struct Adaptor<RES(double)>
        {
          template<typename FUN>
          static auto
          build (FUN&& fun)
            {
              return [functor=std::forward<FUN>(fun)]
                     (Node* node) -> _FunRet<FUN>
                        {
                          return functor (guessHeight(level(node)));
                        };
            }
        };
    };



  /* ========= Graph Statistics Evaluation ========= */

  struct StatKey
    : std::pair<size_t,string>
    {
      using std::pair<size_t,string>::pair;
      operator size_t const&() const { return this->first; }
      operator string const&() const { return this->second;}
    };
  const StatKey STAT_NODE{0,"node"};  ///< all nodes
  const StatKey STAT_SEED{1,"seed"};  ///< seed node
  const StatKey STAT_EXIT{2,"exit"};  ///< exit node
  const StatKey STAT_INNR{3,"innr"};  ///< inner node
  const StatKey STAT_FORK{4,"fork"};  ///< forking node
  const StatKey STAT_JOIN{5,"join"};  ///< joining node
  const StatKey STAT_LINK{6,"link"};  ///< 1:1 linking node
  const StatKey STAT_KNOT{7,"knot"};  ///< knot (joins and forks)
  const StatKey STAT_WGHT{8,"wght"};  ///< node weight

  const std::array KEYS = {STAT_NODE,STAT_SEED,STAT_EXIT,STAT_INNR,STAT_FORK,STAT_JOIN,STAT_LINK,STAT_KNOT,STAT_WGHT};
  const uint CAT = KEYS.size();
  const uint IDX_SEED = 1; // index of STAT_SEED

  namespace {
    template<class NOD>
    inline auto
    prepareEvaluations()
    {
      return std::array<std::function<uint(NOD&)>, CAT>
        { [](NOD&  ){ return 1;         }
        , [](NOD& n){ return isStart(n);}
        , [](NOD& n){ return isExit(n); }
        , [](NOD& n){ return isInner(n);}
        , [](NOD& n){ return isFork(n); }
        , [](NOD& n){ return isJoin(n); }
        , [](NOD& n){ return isLink(n); }
        , [](NOD& n){ return isKnot(n); }
        , [](NOD& n){ return n.weight;  }
        };
    }
  }

  using VecU      = std::vector<uint>;
  using LevelSums = std::array<uint, CAT>;

  /**
   * Distribution indicators for one kind of evaluation.
   * Evaluations over the kind of node are collected per (time)level.
   * This data is then counted, averaged and weighted.
   */
  struct Indicator
    {
      VecU data{};
      uint   cnt {0};  ///< global sum over all levels
      double frac{0};  ///< fraction of all nodes
      double pS  {0};  ///< average per segment
      double pL  {0};  ///< average per level
      double pLW {0};  ///< average per level and level-width
      double cL  {0};  ///< weight centre level for this indicator
      double cLW {0};  ///< weight centre level width-reduced
      double sL  {0};  ///< weight centre on subgraph
      double sLW {0};  ///< weight centre on subgraph width-reduced

      void
      addPoint (uint levelID, uint sublevelID, uint width, uint items)
        {
          REQUIRE (levelID == data.size()); // ID is zero based
          REQUIRE (width > 0);
          data.push_back (items);
          cnt += items;
          pS  += items;
          pL  += items;
          pLW += items / double(width);
          cL  += levelID * items;
          cLW += levelID * items/double(width);
          sL  += sublevelID * items;
          sLW += sublevelID * items/double(width);
        }

      void
      closeAverages (uint nodes, uint levels, uint segments, double avgheight)
        {
          REQUIRE (levels == data.size());
          REQUIRE (levels > 0);
          frac = cnt / double(nodes);
          cL   = pL?  cL/pL   :0;  // weighted averages: normalise to weight sum
          cLW  = pLW? cLW/pLW :0;
          sL   = pL?  sL/pL   :0;
          sLW  = pLW? sLW/pLW :0;
          pS  /= segments;         // simple averages : normalise to number of segments
          pL  /= levels;           // simple averages : normalise to number of levels
          pLW /= levels;
          cL  /= levels-1;         // weight centres  : as fraction of maximum level-ID
          cLW /= levels-1;
          ASSERT (avgheight >= 1.0);
          if (avgheight > 1.0)
            { // likewise for weight centres relative to subgraph
              sL  /= avgheight-1;  // height is 1-based, while the contribution was 0-based
              sLW /= avgheight-1;
            }
          else
            sL = sLW = 0.5;
        }
    };

  /**
   * Statistic data calculated for a given chain-load topology
   */
  struct Statistic
    {
      uint nodes{0};
      uint levels{0};
      uint segments{1};
      uint maxheight{0};
      double avgheight{0};
      VecU width{};
      VecU sublevel{};

      std::array<Indicator, CAT> indicators;

      explicit
      Statistic (uint lvls)
        : nodes{0}
        , levels{0}
        {
          reserve (lvls);
        }

      void
      addPoint (uint levelWidth, uint sublevelID, LevelSums& particulars)
        {
          levels += 1;
          nodes  += levelWidth;
          width.push_back (levelWidth);
          sublevel.push_back (sublevelID);
          ASSERT (levels == width.size());
          ASSERT (0 < levels);
          ASSERT (0 < levelWidth);
          for (uint i=0; i< CAT; ++i)
            indicators[i].addPoint (levels-1, sublevelID, levelWidth, particulars[i]);
        }

      void
      closeAverages (uint segs, uint maxSublevelID)
        {
          segments  = segs;
          maxheight = maxSublevelID + 1;
          avgheight = levels / double(segments);
          for (uint i=0; i< CAT; ++i)
            indicators[i].closeAverages (nodes,levels,segments,avgheight);
        }

    private:
      void
      reserve (uint lvls)
        {
          width.reserve (lvls);
          sublevel.reserve(lvls);
          for (uint i=0; i< CAT; ++i)
            {
              indicators[i] = Indicator{};
              indicators[i].data.reserve(lvls);
            }
        }
    };




  /**
   * Operator on TestChainLoad to evaluate the current graph connectivity.
   * In a pass over the internal storage, all nodes are classified
   * and accounted into a set of categories, thereby evaluating
   * - the overall number of nodes and levels generated
   * - the number of nodes in each level (termed _level width_)
   * - the fraction of overall nodes falling into each category
   * - the average number of category members over the levels
   * - the density of members, normalised over level width
   * - the weight centre of this category's members
   * - the weight centre according to density
   */
  template<size_t maxFan>
  inline Statistic
  TestChainLoad<maxFan>::computeGraphStatistics()
    {
      auto totalLevels = uint(topLevel());
      auto classify = prepareEvaluations<Node>();
      Statistic stat(totalLevels);
      LevelSums particulars{0};
      size_t level{0},
             sublevel{0},
             maxsublevel{0};
      size_t segs{0};
      uint width{0};
      auto detectSubgraphs = [&]{ // to be called when a level is complete
                                  if (width==1 and particulars[IDX_SEED]==1)
                                    { // previous level actually started new subgraph
                                      sublevel = 0;
                                      ++segs;
                                    }
                                  else
                                    maxsublevel = max (sublevel,maxsublevel);
                                };

      for (Node& node : allNodes())
        {
          if (level != node.level)
            {// Level completed...
              detectSubgraphs();
              // record statistics for previous level
              stat.addPoint (width, sublevel, particulars);
              // switch to next time-level
              ++level;
              ++sublevel;
              ENSURE (level == node.level);
              particulars = LevelSums{0};
              width = 0;
            }
          // classify and account..
          ++width;
          for (uint i=0; i<CAT; ++i)
            particulars[i] += classify[i](node);
        }
      ENSURE (level == topLevel());
      detectSubgraphs();
      stat.addPoint (width, sublevel, particulars);
      stat.closeAverages (segs, maxsublevel);
      return stat;
    }




  /**
   * Print a tabular summary of graph characteristics.
   * @remark explanation of indicators
   *       - »node« : accounting for all nodes
   *       - »seed« : seed nodes start a new subgraph or side chain
   *       - »exit« : exit nodes produce output and have no successor
   *       - »innr« : inner nodes have both predecessors and successors
   *       - »fork« : a node linked to more than one successor
   *       - »join« : a node consuming data from more than one predecessor
   *       - »link« : a node in a linear processing chain; one input, one output
   *       - »knot« : a node which both joins data and forks out to multiple successors
   *       - »LEVL« : the overall number of distinct _time levels_ in the graph
   *       - »SEGS« : the number of completely disjoint partial subgraphs
   *       - `frac` : the percentage of overall nodes falling into this category
   *       - `∅pS`  : averaged per segment (warning: see below)
   *       - `∅pL`  : averaged per level
   *       - `∅pLW` : count normalised to the width at that level and then averaged per level
   *       - `γL◆`  : weight centre of this kind of node, relative to the overall graph
   *       - `γLW◆` : the same, but using the level-width-normalised value
   *       - `γL⬙`  : weight centre, but relative to the current subgraph or segment
   *       - `γLW⬙` : same, but using the level-width-normalised value
   *
   * Together, these values indicate how the simulated processing load
   * is structured over time, assuming that the _»Levels« are processed consecutively_
   * in temporal order. The graph can unfold or contract over time, and thus nodes can
   * be clustered irregularly, which can be seen from the weight centres; for that
   * reason, the width-normalised variants of the indicators are also accounted for,
   * since a wider graph also implies that there are more nodes of each kind per level,
   * even while the actual density of this kind did not increase.
   * @warning no comprehensive connectivity analysis is performed, and thus there is
   *       *no reliable indication of subgraphs*. The `SEGS` statistics may be misleading,
   *       since these count only completely severed and restarted graphs.
   */
  template<size_t maxFan>
  inline TestChainLoad<maxFan>&&
  TestChainLoad<maxFan>::printTopologyStatistics()
    {
      cout << "INDI: cnt frac ∅pS ∅pL ∅pLW γL◆ γLW◆ γL⬙ γLW⬙\n";
      _Fmt line{"%4s: %3d %3.0f%% %5.1f %5.2f %4.2f %4.2f %4.2f %4.2f %4.2f\n"};
      Statistic stat = computeGraphStatistics();
      for (uint i=0; i< CAT; ++i)
        {
          Indicator& indi = stat.indicators[i];
          cout << line % KEYS[i]
                       % indi.cnt
                       % (indi.frac*100)
                       % indi.pS
                       % indi.pL
                       % indi.pLW
                       % indi.cL
                       % indi.cLW
                       % indi.sL
                       % indi.sLW
                       ;
        }
      cout << _Fmt{"LEVL: %3d\n"} % stat.levels;
      cout << _Fmt{"SEGS: %3d  h = ∅%3.1f / max.%2d\n"}
                  % stat.segments
                  % stat.avgheight
                  % stat.maxheight;
      cout << "───═══───═══───═══───═══───═══───═══───═══───═══───═══───═══───"
           << endl;
      return move(*this);
    }
2023-12-03 23:33:06 +01:00
|
|
|
|
|
|
|
|
|
|
|
2023-12-09 03:13:48 +01:00
|
|
|
|
/* ========= Configurable Computational Load ========= */
|
|
|
|
|
|
|
|
|
|
|
|
/**
|
|
|
|
|
|
* A calibratable CPU load to be invoked from a node job functor.
|
2023-12-11 19:42:23 +01:00
|
|
|
|
* Two distinct methods for load generation are provided
|
|
|
|
|
|
* - tight loop with arithmetics in register
|
|
|
|
|
|
* - repeatedly accessing and adding memory marked as `volatile`
|
|
|
|
|
|
* The `timeBase` multiplied with the given `scaleStep´ determines
|
|
|
|
|
|
* the actual run time. When using the _memory method_ (`useAllocation`),
|
|
|
|
|
|
* a heap block of `staleStep*sizeBase` is used, and the number of
|
|
|
|
|
|
* repetitions is chosen such as to match the given timing goal.
|
|
|
|
|
|
* @note since performance depends on the platform, it is mandatory
|
|
|
|
|
|
* to invoke the [self calibration](\ref ComputationalLoad::calibrate)
|
|
|
|
|
|
* at least once prior to use. Performing the calibration with default
|
|
|
|
|
|
* base settings is acceptable, since mostly the overall expense is
|
|
|
|
|
|
* growing linearly; obviously the calibration is more precisely
|
|
|
|
|
|
* however when using the actual `timeBase` and `sizeBase` of
|
|
|
|
|
|
* the intended usage. The calibration watches processing
|
|
|
|
|
|
* speed in a microbenchmark with `LOAD_BENCHMARK_RUNS`
|
|
|
|
|
|
* repetitions; the result is stored in a static
|
|
|
|
|
|
* variable and can thus be reused.
|
2023-12-09 03:13:48 +01:00
|
|
|
|
*/
|
|
|
|
|
|
class ComputationalLoad
|
2023-12-11 22:55:11 +01:00
|
|
|
|
: util::MoveAssign
|
2023-12-09 03:13:48 +01:00
|
|
|
|
{
|
2023-12-10 23:13:05 +01:00
|
|
|
|
using Sink = volatile size_t;
|
|
|
|
|
|
|
|
|
|
|
|
lib::UninitialisedDynBlock<Sink> memBlock_{};
|
2023-12-10 22:09:46 +01:00
|
|
|
|
|
|
|
|
|
|
static double&
|
2023-12-10 23:13:05 +01:00
|
|
|
|
computationSpeed (bool mem) ///< in iterations/µs
|
2023-12-10 22:09:46 +01:00
|
|
|
|
{
|
2023-12-10 23:13:05 +01:00
|
|
|
|
static double cpuSpeed{LOAD_SPEED_BASELINE};
|
|
|
|
|
|
static double memSpeed{LOAD_SPEED_BASELINE};
|
|
|
|
|
|
return mem? memSpeed : cpuSpeed;
|
2023-12-10 22:09:46 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
2023-12-09 03:13:48 +01:00
|
|
|
|
public:
|
2023-12-11 19:42:23 +01:00
|
|
|
|
microseconds timeBase = LOAD_DEFAULT_TIME;
|
|
|
|
|
|
size_t sizeBase = LOAD_DEFAULT_MEM_SIZE;
|
2023-12-10 22:09:46 +01:00
|
|
|
|
bool useAllocation = false;
|
2023-12-10 19:58:18 +01:00
|
|
|
|
|
2023-12-11 19:42:23 +01:00
|
|
|
|
/** cause a delay by computational load */
|
2023-12-10 19:58:18 +01:00
|
|
|
|
double
|
|
|
|
|
|
invoke (uint scaleStep =1)
|
|
|
|
|
|
{
|
2023-12-11 19:42:23 +01:00
|
|
|
|
if (scaleStep == 0 or timeBase < 1us)
|
|
|
|
|
|
return 0; // disabled
|
2023-12-10 23:13:05 +01:00
|
|
|
|
return useAllocation? benchmarkTime ([this,scaleStep]{ causeMemProcessLoad (scaleStep); })
|
|
|
|
|
|
: benchmarkTime ([this,scaleStep]{ causeComputationLoad(scaleStep); });
|
2023-12-10 19:58:18 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
2023-12-11 19:42:23 +01:00
|
|
|
|
/** @return averaged runtime in current configuration */
|
2023-12-10 19:58:18 +01:00
|
|
|
|
double
|
|
|
|
|
|
benchmark (uint scaleStep =1)
|
|
|
|
|
|
{
|
2023-12-10 22:09:46 +01:00
|
|
|
|
return microBenchmark ([&]{ invoke(scaleStep);}
|
|
|
|
|
|
,LOAD_BENCHMARK_RUNS)
|
2023-12-11 19:42:23 +01:00
|
|
|
|
.first; // ∅ runtime in µs
|
2023-12-10 19:58:18 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
2023-12-10 22:09:46 +01:00
|
|
|
|
void
|
2023-12-10 19:58:18 +01:00
|
|
|
|
calibrate()
|
|
|
|
|
|
{
|
2023-12-11 01:42:38 +01:00
|
|
|
|
TRANSIENTLY(useAllocation) = false;
|
|
|
|
|
|
performIncrementalCalibration();
|
|
|
|
|
|
useAllocation = true;
|
|
|
|
|
|
performIncrementalCalibration();
|
2023-12-10 22:09:46 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
2023-12-11 19:42:23 +01:00
|
|
|
|
void
|
|
|
|
|
|
maybeCalibrate()
|
2023-12-10 22:09:46 +01:00
|
|
|
|
{
|
2023-12-11 19:42:23 +01:00
|
|
|
|
if (not isCalibrated())
|
|
|
|
|
|
calibrate();
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
bool
|
|
|
|
|
|
isCalibrated() const
|
|
|
|
|
|
{
|
|
|
|
|
|
return computationSpeed(false) != LOAD_SPEED_BASELINE;
|
2023-12-10 22:09:46 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
private:
|
|
|
|
|
|
uint64_t
|
|
|
|
|
|
roundsNeeded (uint scaleStep)
|
|
|
|
|
|
{
|
|
|
|
|
|
auto desiredMicros = scaleStep*timeBase.count();
|
2023-12-10 23:13:05 +01:00
|
|
|
|
return uint64_t(desiredMicros*computationSpeed(useAllocation));
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
auto
|
|
|
|
|
|
allocNeeded (uint scaleStep)
|
|
|
|
|
|
{
|
|
|
|
|
|
auto cnt = roundsNeeded(scaleStep);
|
2023-12-11 19:42:23 +01:00
|
|
|
|
auto siz = max (scaleStep * sizeBase, 1u);
|
2023-12-11 01:42:38 +01:00
|
|
|
|
auto rep = max (cnt/siz, 1u);
|
2023-12-10 23:13:05 +01:00
|
|
|
|
// increase size to fit
|
2023-12-11 01:42:38 +01:00
|
|
|
|
siz = cnt / rep;
|
|
|
|
|
|
return make_pair (siz,rep);
|
2023-12-10 22:09:46 +01:00
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
void
|
|
|
|
|
|
causeComputationLoad (uint scaleStep)
|
|
|
|
|
|
{
|
2023-12-10 23:13:05 +01:00
|
|
|
|
auto round = roundsNeeded (scaleStep);
|
|
|
|
|
|
Sink sink;
|
2023-12-27 23:59:31 +01:00
|
|
|
|
size_t scree{0x55DEAD55};
|
2023-12-10 22:09:46 +01:00
|
|
|
|
for ( ; 0 < round; --round)
|
2023-12-10 23:13:05 +01:00
|
|
|
|
boost::hash_combine (scree,scree);
|
2023-12-10 22:09:46 +01:00
|
|
|
|
sink = scree;
|
|
|
|
|
|
sink++;
|
|
|
|
|
|
}
      
      void
      causeMemProcessLoad (uint scaleStep)
        {
          auto [siz,round] = allocNeeded (scaleStep);
          memBlock_.allocate(siz);
          ++*memBlock_.front();
          for ( ; 0 < round; --round)
            for (size_t i=0; i<memBlock_.size()-1; ++i)
              memBlock_[i+1] += memBlock_[i];
          ++*memBlock_.back();
        }
      
      double
      determineSpeed()
        {
          uint step4gauge = 1;
          double micros = benchmark (step4gauge);
          auto stepsDone = roundsNeeded (step4gauge);
          return stepsDone / micros;
        }
      
      void
      performIncrementalCalibration()
        {
          double& speed = computationSpeed(useAllocation);
          double prev{speed}, delta;
          do {
              speed = determineSpeed();
              delta = abs (1.0 - speed/prev);
              prev = speed;
            }
          while (delta > 0.05);
        }
    };
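  
  /* === Illustration (hypothetical values) ===
   * A sketch of how the load generator above is typically used;
   * the 1000us figure is an arbitrary example, not a default from this file:
   *
   *     ComputationalLoad load;
   *     load.timeBase = 1000us;     // desired delay at scaleStep == 1
   *     load.maybeCalibrate();      // one-time calibration of speed factors
   *     load.invoke();              // burns roughly timeBase of CPU time
   */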
  
  
  /** a »throw-away« render-job */
  inline SpecialJobFun
  onetimeCrunch (milliseconds runTime)
  {                                          // ensure calibration prior to use
    ComputationalLoad().maybeCalibrate();
    
    return SpecialJobFun{
            [runTime](JobParameter) -> void
              {
                ComputationalLoad crunch;
                crunch.timeBase = runTime;
                crunch.invoke();
              }};
  }
  
  
  /**
   * Process the complete node graph synchronously within the calling thread.
   * @param timeBase time delay produced by ComputationalLoad at `Node.weight==1`;
   *        can be set to zero to disable the synthetic processing load on nodes
   * @param sizeBase allocation base size used; also causes switch to memory-access based load
   * @see TestChainLoad::calcRuntimeReference() for a benchmark based on this processing
   */
  template<size_t maxFan>
  TestChainLoad<maxFan>&&
  TestChainLoad<maxFan>::performGraphSynchronously (microseconds timeBase, size_t sizeBase)
  {
    ComputationalLoad compuLoad;
    compuLoad.timeBase = timeBase;
    if (not sizeBase)
      {
        compuLoad.sizeBase = LOAD_DEFAULT_MEM_SIZE;
        compuLoad.useAllocation = false;
      }
    else
      {
        compuLoad.sizeBase = sizeBase;
        compuLoad.useAllocation = true;
      }
    compuLoad.maybeCalibrate();
    
    size_t seed = this->getSeed();
    for (Node& n : allNodes())
      {
        n.hash = isStart(n)? seed : 0;
        if (n.weight)
          compuLoad.invoke (n.weight);
        n.calculate();
      }
    return move(*this);
  }
  
  
  
  
  /* ========= Render Job generation and Scheduling ========= */
  
  /**
   * Baseclass: JobFunctor to invoke TestChainLoad
   */
  class ChainFunctor
    : public JobClosure
    {
      static lib::time::Grid&
      testGrid()  ///< Meyer's Singleton : a fixed 1-f/s quantiser
        {
          static lib::time::FixedFrameQuantiser gridOne{FrameRate::STEP};
          return gridOne;
        }
      
      
      /* === JobFunctor Interface === */
      
      string diagnostic()  const             =0;
      void invokeJobOperation (JobParameter) =0;
      
      JobKind
      getJobKind()  const
        {
          return TEST_JOB;
        }
      
      InvocationInstanceID
      buildInstanceID (HashVal)  const override
        {
          return InvocationInstanceID();
        }
      
      size_t
      hashOfInstance (InvocationInstanceID invoKey)  const override
        {
          std::hash<size_t> hashr;
          HashVal res = hashr (invoKey.frameNumber);
          return res;
        }
      
      
    public:
      /** package the node-index to invoke.
       * @note per convention for this test, this info will be
       *       packaged into the lower word of the InvocationInstanceID
       */
      static InvocationInstanceID
      encodeNodeID (size_t idx)
        {
          InvocationInstanceID invoKey;
          invoKey.code.w1 = idx;
          return invoKey;
        }
      
      static size_t
      decodeNodeID (InvocationInstanceID invoKey)
        {
          return size_t(invoKey.code.w1);
        }
      
      static Time
      encodeLevel (size_t level)
        {
          return Time{testGrid().timeOf (FrameCnt(level))};
        }
      
      static size_t
      decodeLevel (TimeValue nominalTime)
        {
          return testGrid().gridPoint (nominalTime);
        }
    };
  
  
  
  /**
   * Render JobFunctor to invoke the _calculation_ of a single Node.
   * The existing Node connectivity is used to retrieve the hash values
   * from predecessors — so these are expected to be calculated beforehand.
   * For setup, the start of the ChainLoad's Node array is required.
   * @tparam maxFan controls expected Node memory layout
   */
  template<size_t maxFan>
  class RandomChainCalcFunctor
    : public ChainFunctor
    {
      using Node = typename TestChainLoad<maxFan>::Node;
      
      Node* startNode_;
      ComputationalLoad* compuLoad_;
      
      
    public:
      RandomChainCalcFunctor (Node& startNode, ComputationalLoad* load =nullptr)
        : startNode_{&startNode}
        , compuLoad_{load}
        { }
      
      
      /** render job invocation to trigger one Node recalculation */
      void
      invokeJobOperation (JobParameter param)  override
        {
          size_t nodeIdx = decodeNodeID (param.invoKey);
          size_t level   = decodeLevel (TimeValue{param.nominalTime});
          Node& target = startNode_[nodeIdx];
          ASSERT (target.level == level);
          // invoke the »media calculation«
          if (compuLoad_ and target.weight)
            compuLoad_->invoke (target.weight);
          target.calculate();
        }
      
      
      string diagnostic()  const override
        {
          return _Fmt{"ChainCalc(w:%d)◀%s"}
                     % maxFan
                     % util::showAddr(startNode_);
        }
    };
  
  
  
  /**
   * Render JobFunctor to perform chunk-wise planning of Node jobs
   * to calculate a complete Chain-Load graph step by step.
   */
  template<size_t maxFan>
  class RandomChainPlanFunctor
    : public ChainFunctor
    {
      using Node = typename TestChainLoad<maxFan>::Node;
      
      function<void(size_t,size_t)>      scheduleCalcJob_;
      function<void(Node*,Node*)>        markDependency_;
      function<void(size_t,size_t,bool)> continuation_;
      
      size_t maxCnt_;
      Node* nodes_;
      
      size_t currIdx_{0};     // Note: this test-JobFunctor is stateful
      
      
    public:
      template<class CAL, class DEP, class CON>
      RandomChainPlanFunctor (Node& nodeArray, size_t nodeCnt,
                              CAL&& schedule, DEP&& markDepend,
                              CON&& continuation)
        : scheduleCalcJob_{forward<CAL> (schedule)}
        , markDependency_{forward<DEP> (markDepend)}
        , continuation_{forward<CON> (continuation)}
        , maxCnt_{nodeCnt}
        , nodes_{&nodeArray}
        { }
      
      
      /** render job invocation to trigger one batch of scheduling;
       * the installed callback-λ should actually place a job with
       * RandomChainCalcFunctor for each node, and also inform the
       * Scheduler about dependency relations between jobs. */
      void
      invokeJobOperation (JobParameter param)  override
        {
          size_t reachedLevel{0};
          size_t targetNodeIDX = decodeNodeID (param.invoKey);
          for ( ; currIdx_<maxCnt_; ++currIdx_)
            {
              Node* n = &nodes_[currIdx_];
              if (currIdx_ <= targetNodeIDX)
                reachedLevel = n->level;
              else  // continue until end of current level
                if (n->level > reachedLevel)
                  break;
              scheduleCalcJob_(currIdx_, n->level);
              for (Node* pred: n->pred)
                markDependency_(pred,n);
            }
          ENSURE (currIdx_ > 0);
          continuation_(currIdx_-1, reachedLevel, currIdx_ < maxCnt_);
        }
      
      
      string diagnostic()  const override
        {
          return "ChainPlan";
        }
    };
  
  
  
  /**
   * Setup and wiring for a test run to schedule a computation structure
   * as defined by this TestChainLoad instance. This context is linked to
   * a concrete TestChainLoad and Scheduler instance and holds a memory block
   * with actual schedules, which are dispatched in batches into the Scheduler.
   * It is *crucial* to keep this object *alive during the complete test run*,
   * which is achieved by a blocking wait on the callback triggered after
   * dispatching the last batch of calculation jobs. This process itself
   * is meant for test usage and is not thread-safe (while obviously the
   * actual scheduling and processing happens in the worker threads).
   * Yet the instance can be re-used to dispatch further test runs.
   */
  template<size_t maxFan>
  class TestChainLoad<maxFan>::ScheduleCtx
    : util::MoveOnly
    {
      TestChainLoad& chainLoad_;
      Scheduler&     scheduler_;
      
      lib::UninitialisedDynBlock<ScheduleSpec> schedule_;
      
      FrameRate levelSpeed_{1, SCHEDULE_LEVEL_STEP};
      FrameRate planSpeed_{1, SCHEDULE_PLAN_STEP};
      uint blockLoadFactor_{2};
      size_t chunkSize_{DEFAULT_CHUNKSIZE};
      TimeVar startTime_{Time::ANYTIME};
      microseconds deadline_{STANDARD_DEADLINE};
      microseconds preRoll_{guessPlanningPreroll()};
      ManifestationID manID_{};
      
      std::promise<void> signalDone_{};
      
      std::unique_ptr<ComputationalLoad>               compuLoad_;
      std::unique_ptr<RandomChainCalcFunctor<maxFan>>  calcFunctor_;
      std::unique_ptr<RandomChainPlanFunctor<maxFan>>  planFunctor_;
      
      
      /* ==== Callbacks from job planning ==== */
      
      /** Callback: place a single job into the scheduler */
      void
      disposeStep (size_t idx, size_t level)
        {
          schedule_[idx] = scheduler_.defineSchedule(calcJob (idx,level))
                                     .manifestation(manID_)
                                     .startTime (calcStartTime(level))
                                     .lifeWindow (deadline_)
                                     .post();
        }
      
      /** Callback: define a dependency between scheduled jobs */
      void
      setDependency (Node* pred, Node* succ)
        {
          size_t predIdx = chainLoad_.nodeID (pred);
          size_t succIdx = chainLoad_.nodeID (succ);
          schedule_[predIdx].linkToSuccessor (schedule_[succIdx]);
        }
      
      /** continue planning: schedule follow-up planning job */
      void
      continuation (size_t lastNodeIDX, size_t levelDone, bool work_left)
        {
          if (work_left)
            {
              size_t nextChunkEndNode = calcNextChunkEnd (lastNodeIDX);
              scheduler_.continueMetaJob (calcPlanScheduleTime (lastNodeIDX+1)
                                         ,planningJob (nextChunkEndNode)
                                         ,manID_);
            }
          else
            scheduler_.defineSchedule(wakeUpJob())
                      .manifestation (manID_)
                      .startTime(calcStartTime (levelDone+1))
                      .lifeWindow(SAFETY_TIMEOUT)
                      .post()
                      .linkToPredecessor (schedule_[lastNodeIDX])
                      ;          // Setup wait-dependency on last computation
        }
      
      
      std::future<void>
      performRun()
        {
          auto finished = attachNewCompletionSignal();
          size_t numNodes = chainLoad_.size();
          size_t firstChunkEndNode = calcNextChunkEnd(0)-1;
          schedule_.allocate (numNodes);
          compuLoad_->maybeCalibrate();
          startTime_ = anchorStartTime();
          scheduler_.seedCalcStream (planningJob(firstChunkEndNode)
                                    ,manID_
                                    ,calcLoadHint());
          return finished;
        }
      
      
    public:
      ScheduleCtx (TestChainLoad& mother, Scheduler& scheduler)
        : chainLoad_{mother}
        , scheduler_{scheduler}
        , compuLoad_{new ComputationalLoad}
        , calcFunctor_{new RandomChainCalcFunctor<maxFan>{chainLoad_.nodes_[0], compuLoad_.get()}}
        , planFunctor_{new RandomChainPlanFunctor<maxFan>{chainLoad_.nodes_[0], chainLoad_.numNodes_
                                                         ,[this](size_t i, size_t l){ disposeStep(i,l); }
                                                         ,[this](auto* p, auto* s) { setDependency(p,s); }
                                                         ,[this](size_t n, size_t l, bool w){ continuation(n,l,w); }
                                                         }}
        { }
      
      
      /** dispose one complete run of the graph into the scheduler
       * @return observed runtime in µs
       */
      double
      launch_and_wait()
        {
          return benchmarkTime ([this]
                                  {
                                    awaitBlocking(
                                      performRun());
                                  });
        }
      
      
      /* ===== Setter / builders for custom configuration ===== */
      
      ScheduleCtx&&
      withLevelDuration (microseconds plannedTime_per_level)
        {
          levelSpeed_ = FrameRate{1, Duration{_uTicks(plannedTime_per_level)}};
          return move(*this);
        }
      
      ScheduleCtx&&
      withPlanningStep (microseconds planningTime_per_node)
        {
          planSpeed_ = FrameRate{1, Duration{_uTicks(planningTime_per_node)}};
          preRoll_ = guessPlanningPreroll();
          return move(*this);
        }
      
      ScheduleCtx&&
      withLoadFactor (uint factor_on_levelSpeed)
        {
          blockLoadFactor_ = factor_on_levelSpeed;
          return move(*this);
        }
      
      ScheduleCtx&&
      withChunkSize (size_t nodes_per_chunk)
        {
          chunkSize_ = nodes_per_chunk;
          preRoll_ = guessPlanningPreroll();
          return move(*this);
        }
      
      ScheduleCtx&&
      withPreRoll (microseconds planning_headstart)
        {
          preRoll_ = planning_headstart;
          return move(*this);
        }
      
      ScheduleCtx&&
      withJobDeadline (microseconds deadline_after_start)
        {
          deadline_ = deadline_after_start;
          return move(*this);
        }
      
      ScheduleCtx&&
      withManifestation (ManifestationID manID)
        {
          manID_ = manID;
          return move(*this);
        }
      
      ScheduleCtx&&
      withLoadTimeBase (microseconds timeBase =LOAD_DEFAULT_TIME)
        {
          compuLoad_->timeBase = timeBase;
          return move(*this);
        }
      
      ScheduleCtx&&
      deactivateLoad()
        {
          compuLoad_->timeBase = 0us;
          return move(*this);
        }
      
      ScheduleCtx&&
      withLoadMem (size_t sizeBase =LOAD_DEFAULT_MEM_SIZE)
        {
          if (not sizeBase)
            {
              compuLoad_->sizeBase = LOAD_DEFAULT_MEM_SIZE;
              compuLoad_->useAllocation = false;
            }
          else
            {
              compuLoad_->sizeBase = sizeBase;
              compuLoad_->useAllocation = true;
            }
          return move(*this);
        }
      
      
    private:
      /** push away any existing wait state and attach new clean state */
      std::future<void>
      attachNewCompletionSignal()
        {
          std::promise<void> notYetTriggered;
          signalDone_.swap (notYetTriggered);
          return signalDone_.get_future();
        }
      
      void
      awaitBlocking (std::future<void> signal)
        {
          if (std::future_status::timeout == signal.wait_for (SAFETY_TIMEOUT))
            throw err::Fatal("Timeout on Scheduler test exceeded.");
        }
      
      Job
      calcJob (size_t idx, size_t level)
        {
          return Job{*calcFunctor_
                    , calcFunctor_->encodeNodeID(idx)
                    , calcFunctor_->encodeLevel(level)
                    };
        }
      
      Job
      planningJob (size_t endNodeIDX)
        {
          return Job{*planFunctor_
                    , planFunctor_->encodeNodeID(endNodeIDX)
                    , Time::ANYTIME
                    };
        }
      
      Job
      wakeUpJob ()
        {
          SpecialJobFun wakeUpFun{[this](JobParameter)
                                    {
                                      signalDone_.set_value();
                                    }};
          return Job{ wakeUpFun
                    , InvocationInstanceID()
                    , Time::ANYTIME
                    };
        }
      
      Time
      anchorStartTime()
        {
          return RealClock::now() + _uTicks(preRoll_);
        }
      
      microseconds
      guessPlanningPreroll()
        {
          return microseconds(_raw(Time{chunkSize_ / planSpeed_}));
        }
      
      FrameRate
      calcLoadHint()
        {
          return FrameRate{levelSpeed_ * blockLoadFactor_};
        }
      
      size_t
      calcNextChunkEnd (size_t lastNodeIDX)
        {
          lastNodeIDX += chunkSize_;
          return min (lastNodeIDX, chainLoad_.size()-1);
        }                       // prevent out-of-bound access
      
      Time
      calcStartTime (size_t level)
        {
          return startTime_ + Time{level / levelSpeed_};
        }
      
      Time
      calcPlanScheduleTime (size_t lastNodeIDX)
        {/* must be at least 1 level ahead,
            because dependencies are defined backwards;
            the chain-load graph only defines dependencies over one level,
            thus the first level in the next chunk must still be able to attach
            dependencies to the last row of the preceding chunk, implying that
            those still need to be ahead of schedule, and not yet dispatched.
          */
          lastNodeIDX = min (lastNodeIDX, chainLoad_.size()-1);   // prevent out-of-bound access
          size_t nextChunkLevel = chainLoad_.nodes_[lastNodeIDX].level;
          nextChunkLevel = nextChunkLevel>2? nextChunkLevel-2 : 0;
          return calcStartTime(nextChunkLevel) - _uTicks(preRoll_);
        }
    };
  
  
  /**
   * establish and configure the context used for scheduling computations.
   * @note clears hashes and re-propagates seed in the node graph beforehand.
   */
  template<size_t maxFan>
  typename TestChainLoad<maxFan>::ScheduleCtx
  TestChainLoad<maxFan>::setupSchedule (Scheduler& scheduler)
  {
    clearNodeHashes();
    return ScheduleCtx{*this, scheduler};
  }
  
  
  
}}} // namespace vault::gear::test
#endif /*VAULT_GEAR_TEST_TEST_CHAIN_LOAD_H*/