It turns out not to be correct to use all of the divergence in concurrency as a form factor, since it is quite common that not all cores can be active at every level, given the structural constraints dictated by the load graph. On the other hand, if the empirical work concurrency (excluding wait time) systematically differs from the simple model used for establishing the schedule, then this deviation should indeed be considered a form factor and deducted from the effective stress factor, since it is not a reserve available for speed-up.

The solution entertained here is to derive an effective compounded sum of weights from the calculation used to build the schedule. This compounded weight sum is typically lower than the plain sum of all node weights, precisely because of the theoretical amount of expense reduction assumed during schedule generation. This gives us a handle on the theoretically expected expense, and through the plain weight sum we may draw conclusions about the effective concurrency expected in this schedule. Taking only this part as the base for the empirical deviations yields search results very close to stressFactor ~1 -- implying that the test setup now observes what it was intended to observe.
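To make this reasoning concrete, here is a minimal sketch of the calculation, with hypothetical names and an idealised load-graph representation (levels of node weights and a fixed core count), not the actual scheduler code: the compounded weight sum accounts for the concurrency structurally possible at each level, the ratio of plain to compounded sum gives the effective concurrency expected by the schedule, and the stress factor relates the observed runtime only to that theoretically expected expense.

```python
# Minimal sketch, assuming the load graph is given as levels of node weights
# and a fixed number of cores; all names and structure are hypothetical.

def compounded_weight(levels, cores):
    """Theoretically expected expense of the schedule: within each level,
    nodes may run concurrently, limited by the core count and by the
    number of nodes actually present in that level."""
    total = 0.0
    for weights in levels:
        if not weights:
            continue
        concurrency = min(cores, len(weights))
        # idealised assumption: work within a level spreads evenly over cores
        total += sum(weights) / concurrency
    return total

def plain_weight(levels):
    """Plain sum of all node weights (total work, ignoring concurrency)."""
    return sum(sum(weights) for weights in levels)

def expected_concurrency(levels, cores):
    """Effective concurrency implied by the schedule itself."""
    return plain_weight(levels) / compounded_weight(levels, cores)

def stress_factor(observed_runtime, levels, cores, time_per_weight=1.0):
    """Relate observed runtime only to the theoretically expected expense,
    so structural limits of the load graph are not misread as head-room."""
    expected_runtime = compounded_weight(levels, cores) * time_per_weight
    return observed_runtime / expected_runtime

# Example: a small load graph with three levels, scheduled on 4 cores.
levels = [[3.0], [1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [2.0, 2.0]]
print(expected_concurrency(levels, cores=4))   # 2.0: below 4 due to structural constraints
print(stress_factor(observed_runtime=6.8, levels=levels, cores=4))  # ~1.05
```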