/*
  SEVERAL-BUILDER.hpp  -  builder for a limited fixed collection of elements

  Copyright (C)         Lumiera.org
    2024,               Hermann Vosseler <Ichthyostega@web.de>

  This program is free software; you can redistribute it and/or
  modify it under the terms of the GNU General Public License as
  published by the Free Software Foundation; either version 2 of
  the License, or (at your option) any later version.

  This program is distributed in the hope that it will be useful,
  but WITHOUT ANY WARRANTY; without even the implied warranty of
  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  GNU General Public License for more details.

  You should have received a copy of the GNU General Public License
  along with this program; if not, write to the Free Software
  Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

*/

/** @file several-builder.hpp
 ** Builder to create and populate instances of the lib::Several container.
 ** For mere usage, inclusion of several.hpp should be sufficient, since the
 ** container front-end is generic and intends to hide most details of allocation
 ** and element placement. It is an array-like container, but may hold subclass
 ** elements, while exposing only a reference to the interface type.
 **
 ** # Implementation data layout
 **
 ** The front-end container lib::Several<I> is actually just a smart-ptr referring
 ** to the actual data storage, which resides within an _array bucket._ Typically
 ** the latter is placed into memory managed by a custom allocator, most notably
 ** lib::AllocationCluster. However, by default, the ArrayBucket<I> will be placed
 ** into heap memory. All further meta information is also maintained alongside
 ** this data allocation, including a _deleter function_ to invoke all element
 ** destructors and de-allocate the bucket itself. Neither the type of the
 ** actual elements, nor the type of the allocator is revealed.
 **
 ** Since the actual data elements can (optionally) be of a different type than
 ** the exposed interface type \a I, additional storage and spacing is required
 ** in the element array. The field ArrayBucket<I>::spread defines this spacing
 ** and thus the offset used for subscript access. The actual data storage starts
 ** immediately behind the ArrayBucket, which thus acts as a metadata header.
 ** This arrangement requires a sufficiently sized raw memory allocation to place
 ** the ArrayBucket and the actual data into. Moreover, the allocation code in
 ** ElementFactory::create() is responsible for ensuring proper alignment of the
 ** data storage, especially when the payload data type has alignment requirements
 ** beyond `alignof(void*)`, which is typically used by the standard heap allocator;
 ** additional headroom is added proactively in this case, to be able to shift the
 ** storage buffer ahead to the next alignment boundary.
 **
 ** # Handling of data elements
 **
 ** The ability to emplace a mixture of data types into the storage exposed through
 ** the lib::Several front-end creates some complexities related to element handling.
 ** The implementation uses a generic, rule-based approach to decide on a
 ** case-by-case basis if some given data content is still acceptable. This allows
 ** for rather tricky low-level usages, but has the downside of detecting errors only
 ** at runtime — which in this case is ameliorated by the limitation that elements
 ** must be provided completely up-front, through the SeveralBuilder.
 ** - in order to handle any data element, we must be able to invoke its destructor
 ** - an arbitrary mixture of types can thus only be accepted if we can either
 **   rely on a common virtual base class destructor, or if all data elements
 **   are trivially destructible; these properties can be detected at compile
 **   time with the help of the C++ `<type_traits>` library
 ** - this container can accommodate _non-copyable_ data types, under the proviso
 **   that all the necessary storage is pre-allocated (using `reserve()` from
 **   the builder API)
 ** - otherwise, data can be filled in dynamically, expanding the storage as needed,
 **   given that all existing elements can be safely re-located by move or copy
 **   constructor into a new, larger storage buffer.
 ** - alternatively, when data elements are even _trivially copyable_ (e.g. POD data),
 **   then it is possible to increase the placement spread in the storage at the
 **   point when the requirement to do so is discovered dynamically; objects can be
 **   shifted to other locations by `std::memmove()` in this case.
 ** - notably, lib::AllocationCluster has the ability to dynamically adapt an allocation,
 **   but only if this happens to be currently the last allocation handed out; it can
 **   thus be arranged even for an unknown number of non-copyable objects to be emplaced
 **   when creating the suitable operational conditions.
 **
 ** A key point to note is the fact that the container does not capture and store the
 ** actual data types persistently. Thus, the above rules must be applied in a way
 ** to always ensure safe handling of the contained data. Typically, the first element
 ** actually added will »prime« the container for a certain usage style, and after that,
 ** some other usage patterns may be rejected.
 **
 ** @todo this is a first implementation solution from 6/2024 — and was deemed
 **       _roughly adequate_ at that time, yet should be revalidated once more
 **       observations pertaining to real-world usage are available...
 ** @warning there is a known problem with _over-aligned types,_ which becomes
 **       relevant when the _interface type_ has only a lower alignment requirement,
 **       but an individual element is added with higher alignment requirements.
 **       In this case, while the spread is increased, still the placement of
 **       the element type \a E is used as anchor, possibly leading to misalignment.
 ** @see several-builder-test.cpp
 */


#ifndef LIB_SEVERAL_BUILDER_H
#define LIB_SEVERAL_BUILDER_H


#include "lib/error.hpp"
#include "lib/several.hpp"
#include "include/limits.hpp"
#include "lib/iter-explorer.hpp"
#include "lib/format-string.hpp"
#include "lib/util.hpp"

#include <type_traits>
#include <functional>
#include <cstring>
#include <utility>
#include <vector>


namespace lib {

  namespace err = lumiera::error;

  using std::vector;
  using std::forward;
  using std::move;
  using std::byte;

  namespace {
    /** number of storage slots to open initially;
     *  starting with an over-allocation similar to `std::vector`
     */
    const uint INITIAL_ELM_CNT = 10;


    using util::max;
    using util::min;
    using util::_Fmt;
    using util::positiveDiff;
    using std::is_nothrow_move_constructible_v;
    using std::is_trivially_move_constructible_v;
    using std::is_trivially_destructible_v;
    using std::has_virtual_destructor_v;
    using std::is_trivially_copyable_v;
    using std::is_copy_constructible_v;
    using std::is_object_v;
    using std::is_volatile_v;
    using std::is_const_v;
    using std::is_same_v;
    using lib::meta::is_Subclass;

    using several::ArrayBucket;

    /**
     * Helper to determine the »spread« required to hold
     * elements of type \a TY in memory _with proper alignment._
     */
    template<typename TY>
    size_t inline constexpr
    reqSiz()
    {
      size_t quant = alignof(TY);
      size_t siz = max (sizeof(TY), quant);
      size_t req = (siz/quant) * quant;
      if (req < siz)
        req += quant;
      return req;
    }

    /** determine size of a reserve buffer to place with proper alignment */
    size_t inline constexpr
    alignRes (size_t alignment)
    {
      return positiveDiff (alignment, alignof(void*));
    }
  }//(End)helpers


  namespace allo {// Allocation management policies

    /**
     * Generic factory to manage objects within an ArrayBucket<I> storage,
     * delegating to a custom allocator \a ALO for memory handling.
     * - #create a storage block for a number of objects
     * - #createAt construct a single payload object at index position
     * - #destroy a storage block with proper clean-up (invoke dtors)
     */
    template<class I, template<typename> class ALO>
    class ElementFactory
      : protected ALO<std::byte>
      {
        using Allo   = ALO<std::byte>;
        using AlloT  = std::allocator_traits<Allo>;
        using Bucket = ArrayBucket<I>;

        Allo& baseAllocator() { return *this; }

        template<typename X>
        auto
        adaptAllocator()
          {
            using XAllo = typename AlloT::template rebind_alloc<X>;
            if constexpr (std::is_constructible_v<XAllo, Allo>)
              return XAllo{baseAllocator()};
            else
              return XAllo{};
          }

      public:
        ElementFactory (Allo allo = Allo{})
          : Allo{std::move (allo)}
          { }

        Bucket*
        create (size_t cnt, size_t spread, size_t alignment =alignof(I))
          {
            REQUIRE (cnt);
            REQUIRE (spread);
            size_t storageBytes = Bucket::storageOffset + cnt*spread;
            storageBytes += alignRes (alignment); // over-aligned data => reserve for alignment padding

            // Step-1 : acquire the raw storage buffer
            std::byte* loc = AlloT::allocate (baseAllocator(), storageBytes);
            ENSURE (0 == size_t(loc) % alignof(void*));

            size_t offset = (size_t(loc) + Bucket::storageOffset) % alignment;
            if (offset) // padding needed to next aligned location
              offset = alignment - offset;
            offset += Bucket::storageOffset;
            ASSERT (storageBytes - offset >= cnt*spread);
            Bucket* bucket = reinterpret_cast<Bucket*> (loc);

            using BucketAlloT = typename AlloT::template rebind_traits<Bucket>;
            auto bucketAllo = adaptAllocator<Bucket>();
            // Step-2 : construct the Bucket metadata    |  ▽ ArrayBucket ctor arg ▽
            try { BucketAlloT::construct (bucketAllo, bucket, storageBytes, offset, spread); }
            catch(...)
              {
                AlloT::deallocate (baseAllocator(), loc, storageBytes);
                throw;
              }
            return bucket;
          }

        template<class E, typename...ARGS>
        E&
        createAt (Bucket* bucket, size_t idx, ARGS&& ...args)
          {
            REQUIRE (bucket);
            using ElmAlloT = typename AlloT::template rebind_traits<E>;
            auto elmAllo = adaptAllocator<E>();
            E* loc = reinterpret_cast<E*> (& bucket->subscript (idx));
            ElmAlloT::construct (elmAllo, loc, forward<ARGS> (args)...);
            ENSURE (loc);
            return *loc;
          }

        template<class E>
        void
        destroy (ArrayBucket<I>* bucket)
          {
            REQUIRE (bucket);
            if (bucket->isArmed())
              { // ensure the bucket's destructor is invoked
                //  and in turn itself invokes this function
                bucket->destroy();
                return;
              }
            if (not is_trivially_destructible_v<E>)
              {
                size_t cnt = bucket->cnt;
                using ElmAlloT = typename AlloT::template rebind_traits<E>;
                auto elmAllo = adaptAllocator<E>();
                for (size_t idx=0; idx<cnt; ++idx)
                  {
                    E* elm = reinterpret_cast<E*> (& bucket->subscript (idx));
                    ElmAlloT::destroy (elmAllo, elm);
                  }
              }
            size_t storageBytes = bucket->getAllocSize();
            std::byte* loc = reinterpret_cast<std::byte*> (bucket);
            AlloT::deallocate (baseAllocator(), loc, storageBytes);
          }
      };



    /**
     * Policy Mix-In used to adapt to the ElementFactory and Allocator.
     * @tparam I   Interface type (also used in the lib::Several<I> front-end)
     * @tparam E   a common _element type_ to use by default
     * @tparam ALO custom allocator template
     */
    template<class I, class E, template<typename> class ALO>
    struct AllocationPolicy
      : ElementFactory<I, ALO>
      {
        using Fac    = ElementFactory<I, ALO>;
        using Bucket = ArrayBucket<I>;

        using Fac::Fac; // pass-through ctor

        /** by default assume that memory is practically unlimited... */
        size_t static constexpr ALLOC_LIMIT = size_t(-1) / sizeof(E);

        /// Extension point: able to adjust dynamically to the requested size?
        bool canExpand (Bucket*, size_t) { return false; }

        Bucket*
        realloc (Bucket* data, size_t cnt, size_t spread)
          {
            Bucket* newBucket = Fac::create (cnt, spread, alignof(E));
            if (data)
              try {
                  newBucket->installDestructor (data->getDtor());
                  size_t elms = min (cnt, data->cnt);
                  for (size_t idx=0; idx<elms; ++idx)
                    moveElem (idx, data, newBucket);
                  data->destroy();
                }
              catch(...)
                {
                  newBucket->destroy();
                  throw; // re-throw after clean-up; must not return the destroyed bucket
                }
            return newBucket;
          }
      
      void
      moveElem (size_t idx, Bucket* src, Bucket* tar)
        {
          if constexpr (is_trivially_copyable_v<E>)
            {
              void* oldPos = & src->subscript(idx);
              void* newPos = & tar->subscript(idx);
              size_t amount = min (src->spread, tar->spread);
              std::memmove (newPos, oldPos, amount);
            }
          else
          if constexpr (is_nothrow_move_constructible_v<E>
                        or is_copy_constructible_v<E>)
            {
              E& oldElm = reinterpret_cast<E&> (src->subscript (idx));
              Fac::template createAt<E> (tar, idx
                                        ,std::move_if_noexcept (oldElm));
            }
          else
            {
              NOTREACHED ("realloc immovable type (neither trivially nor typed movable)");
              // This alternative branch matters nonetheless: it allows this code to be
              // instantiated even for »non-copyable« types, assuming that sufficient
              // storage was reserved beforehand, so that copying never actually happens.
              // For context: the std::vector implementation in libStdC++ lacks this option.
            }
          tar->cnt = idx+1; // mark fill continuously for proper clean-up after exception
        }
    };
    
    
    /** Default configuration to use heap memory for lib::Several */
    template<class I, class E>
    using HeapOwn = AllocationPolicy<I, E, std::allocator>;
    
    
  }//(End) namespace several
  
  
  
  
  /*************************************************//**
   * Builder to create and populate a lib::Several<I>.
   * Content elements can be of the _interface type_ \a I,
   * or the _default element type_ \a E. When possible, even
   * elements of an ad-hoc given, unrelated type can be used.
   * The expected standard usage is to place elements of a
   * subclass of \a I — but in fact the only limitation is that
   * later, when using the created lib::Several, all content
   * will be accessed through a (forced) cast to type \a I.
   * Data (and metadata) will be placed into an _extent,_ which
   * lives at a different location, as managed by an Allocator
   * (with the default configuration, data is heap allocated).
   * The expansion behaviour is similar to std::vector, meaning
   * that the buffer grows with exponential stepping. Unlike
   * std::vector, however, even non-copyable objects can be
   * handled, using #reserve to prepare a suitable allocation.
   * @warning due to the flexibility and possible low-level usage
   *          patterns, consistency checks may throw at runtime
   *          when attempting to add an unsuitable element.
   */
  template<class I                                         ///< Interface or base type visible on resulting Several<I>
          ,class E =I                                      ///< a subclass element type (relevant when not trivially movable and destructible)
          ,template<class,class> class POL =allo::HeapOwn  ///< Allocator policy template (parametrised `POL<I,E>`)
          >
  class SeveralBuilder
    : private Several<I>
    , util::MoveOnly
    , POL<I,E>
    {
      using Coll   = Several<I>;
      using Policy = POL<I,E>;
      
      using Bucket  = several::ArrayBucket<I>;
      using Deleter = typename Bucket::Deleter;
      
    public:
      SeveralBuilder() = default;
      
      /** start Several build using a custom allocator */
      template<typename...ARGS, typename = meta::enable_if<std::is_constructible<Policy,ARGS...>>>
      SeveralBuilder (ARGS&& ...alloInit)
        : Several<I>{}
        , Policy{forward<ARGS> (alloInit)...}
        { }
      
      
      /* ===== Builder API ===== */
      
      /** cross-builder to use a custom allocator for the lib::Several container */
      template<template<typename> class ALO =std::void_t
              ,typename...ARGS>
      auto withAllocator (ARGS&& ...args);
      
      /** ensure up-front that a desired capacity is allocated */
      template<typename TY =E>
      SeveralBuilder&&
      reserve (size_t cntElm =1
              ,size_t elmSiz =reqSiz<TY>())
        {
          size_t extraElm = positiveDiff (cntElm, Coll::size());
          ensureElementCapacity<TY> (elmSiz);
          ensureStorageCapacity<TY> (elmSiz, extraElm);
          elmSiz = max (elmSiz, Coll::spread());
          adjustStorage (cntElm, elmSiz);
          return move(*this);
        }
      
      /** discard excess reserve capacity.
       * @warning typically this requires re-allocation and copy
       */
      SeveralBuilder&&
      shrinkFit()
        {
          if (not Coll::empty()
              and size() < capacity())
            fitStorage();
          return move(*this);
        }
      
      /** append copies of one or several arbitrary elements */
      template<typename VAL, typename...VALS>
      SeveralBuilder&&
      append (VAL&& val, VALS&& ...vals)
        {
          emplace<VAL> (forward<VAL> (val));
          if constexpr (0 < sizeof...(VALS))
            return append (forward<VALS> (vals)...);
          else
            return move(*this);
        }
      
      /** append a copy of all values exposed through an iterator */
      template<class IT>
      SeveralBuilder&&
      appendAll (IT&& data)
        {
          explore(data).foreach ([this](auto it){ emplaceCopy(it); });
          return move(*this);
        }
      
      template<class X>
      SeveralBuilder&&
      appendAll (std::initializer_list<X> ili)
        {
          using Val = typename meta::Strip<X>::TypeReferred;
          for (Val const& x : ili)
            emplaceNewElm<Val> (x);
          return move(*this);
        }
      
      /** emplace a number of elements of the defined element type \a E */
      template<typename...ARGS>
      SeveralBuilder&&
      fillElm (size_t cntNew, ARGS&& ...args)
        {
          for ( ; 0<cntNew; --cntNew)
            emplaceNewElm<E> (forward<ARGS> (args)...);
          return move(*this);
        }
      
      /** create a new content element within the managed storage */
      template<class TY, typename...ARGS>
      SeveralBuilder&&
      emplace (ARGS&& ...args)
        {
          using Val = typename meta::Strip<TY>::TypeReferred;
          emplaceNewElm<Val> (forward<ARGS> (args)...);
          return move(*this);
        }
      
      
      /***********************************************************//**
       * Terminal Builder: complete and lock the collection contents.
       * @note the SeveralBuilder is sliced away, effectively
       *       returning only the pointer to the ArrayBucket.
       */
      Several<I>
      build()
        {
          return move (*this);
        }
      
      size_t size()       const { return Coll::size(); }
      bool   empty()      const { return Coll::empty();}
      size_t capacity()   const { return Coll::storageBuffSiz() / Coll::spread(); }
      size_t capReserve() const { return capacity() - size(); }
      
      /** allow to peek into data emplaced thus far...
       * @warning contents may be re-allocated until the final \ref build()
       */
      I&
      operator[] (size_t idx)
        {
          if (idx >= Coll::size())
            throw err::Invalid{_Fmt{"Access index %d >= size(%d)."}
                                    % idx % Coll::size()
                              ,LERR_(INDEX_BOUNDS)
                              };
          return Coll::operator[] (idx);
        }
      
      
    private: /* ========= Implementation of element placement ================ */
      template<class IT>
      void
      emplaceCopy (IT& dataSrc)
        {
          using Val = typename IT::value_type;
          emplaceNewElm<Val> (*dataSrc);
        }
      
      template<class TY, typename...ARGS>
      void
      emplaceNewElm (ARGS&& ...args)
        {
          static_assert (is_object_v<TY> and not (is_const_v<TY> or is_volatile_v<TY>));
          
          probeMoveCapability<TY>();     // mark when target type is not (trivially) movable
          ensureElementCapacity<TY>();   // sufficient or able to adapt spread
          ensureStorageCapacity<TY>();   // sufficient or able to grow buffer
          
          size_t elmSiz = reqSiz<TY>();
          size_t newPos = Coll::size();
          size_t newCnt = Coll::empty()? INITIAL_ELM_CNT : newPos+1;
          adjustStorage (newCnt, max (elmSiz, Coll::spread()));
          ENSURE (Coll::data_);
          ensureDeleter<TY>();
          Policy::template createAt<TY> (Coll::data_, newPos, forward<ARGS> (args)...);
          Coll::data_->cnt = newPos+1;
        }
      
      /** ensure clean-up can be handled properly.
       * @throw err::Invalid when \a TY requires a different style
       *        of deleter than was established for this instance */
      template<class TY>
      void
      ensureDeleter()
        {
          Deleter deleterFunctor = selectDestructor<TY>();
          if (Coll::data_->isArmed()) return;
          Coll::data_->installDestructor (move (deleterFunctor));
        }
      
      /** ensure sufficient element capacity or the ability to adapt element spread */
      template<class TY>
      void
      ensureElementCapacity (size_t requiredSiz =reqSiz<TY>())
        {
          if (Coll::spread() < requiredSiz and not (Coll::empty() or canWildMove()))
            throw err::Invalid{_Fmt{"Unable to place element of type %s (size=%d) "
                                    "into Several-container for element size %d."}
                                   % util::typeStr<TY>() % requiredSiz % Coll::spread()};
        }
      
      /** ensure sufficient storage reserve for \a newElms or verify the ability to re-allocate */
      template<class TY>
      void
      ensureStorageCapacity (size_t requiredSiz =reqSiz<TY>(), size_t newElms =1)
        {
          if (not (Coll::empty()
                   or Coll::hasReserve (requiredSiz, newElms)
                   or Policy::canExpand (Coll::data_, requiredSiz*(Coll::size() + newElms))
                   or canDynGrow()))
            throw err::Invalid{_Fmt{"Several-container is unable to accommodate further element of type %s; "
                                    "storage reserve (%d bytes ≙ %d elms) exhausted and unable to move "
                                    "elements of mixed unknown detail type, which are not trivially movable."}
                                   % util::typeStr<TY>() % Coll::storageBuffSiz() % capacity()};
        }
      /** possibly grow storage and re-arrange elements to accommodate the desired capacity */
      void
      adjustStorage (size_t cnt, size_t spread)
        {
          size_t demand{cnt*spread};
          size_t buffSiz{Coll::storageBuffSiz()};
          if (demand == buffSiz)
            return;
          if (demand > buffSiz)
            {// grow into exponentially expanded new allocation
              if (spread > Coll::spread())
                cnt = max (cnt, buffSiz / Coll::spread());    // retain reserve
              size_t overhead  = sizeof(Bucket) + alignRes(alignof(E));
              size_t safetyLim = LUMIERA_MAX_ORDINAL_NUMBER * Coll::spread();
              size_t expandAlloc = min (positiveDiff (min (safetyLim
                                                          ,Policy::ALLOC_LIMIT)
                                                     ,overhead)
                                       ,max (2*buffSiz, cnt*spread));
              // round down to an even number of elements
              size_t newCnt = expandAlloc / spread;
              expandAlloc = newCnt * spread;
              if (expandAlloc < demand)
                throw err::State{_Fmt{"Storage expansion for Several-collection "
                                      "exceeds safety limit of %d bytes"} % safetyLim
                                ,LERR_(SAFETY_LIMIT)};
              // allocate new storage block...
              Coll::data_ = Policy::realloc (Coll::data_, newCnt,spread);
            }
          ENSURE (Coll::data_);
          if (canWildMove() and spread != Coll::spread())
            adjustSpread (spread);
        }
      
      void
      fitStorage()
        {
          REQUIRE (not Coll::empty());
          if (not (Policy::canExpand (Coll::data_, Coll::size())
                   or canDynGrow()))
            throw err::Invalid{"Unable to shrink storage for Several-collection, "
                               "since at least one element cannot be moved."};
          Coll::data_ = Policy::realloc (Coll::data_, Coll::size(), Coll::spread());
        }
      
      /** move existing data to accommodate the new spread */
      void
      adjustSpread (size_t newSpread)
        {
          REQUIRE (Coll::data_);
          REQUIRE (newSpread * Coll::size() <= Coll::storageBuffSiz());
          size_t oldSpread = Coll::spread();
          if (newSpread > oldSpread)
            // need to spread out, starting from the end
            for (size_t i=Coll::size()-1; 0<i; --i)
              shiftStorage (i, oldSpread, newSpread);
          else
            // attempt to condense the spread, starting from the front
            for (size_t i=1; i<Coll::size(); ++i)
              shiftStorage (i, oldSpread, newSpread);
          // data elements now spaced by the new spread
          Coll::data_->spread = newSpread;
        }
      
      void
      shiftStorage (size_t idx, size_t oldSpread, size_t newSpread)
        {
          REQUIRE (idx);
          REQUIRE (oldSpread);
          REQUIRE (newSpread);
          REQUIRE (Coll::data_);
          byte* oldPos = Coll::data_->storage();
          byte* newPos = oldPos;
          oldPos += idx * oldSpread;
          newPos += idx * newSpread;
          std::memmove (newPos, oldPos, util::min (oldSpread,newSpread));
        }
      
      
      
      /* ==== Logic to decide about possible element handling ==== */
      
      enum DestructionMethod{ UNKNOWN
                            , TRIVIAL
                            , ELEMENT
                            , VIRTUAL
                            };
      static Literal
      render (DestructionMethod m)
        {
          switch (m)
            {
              case TRIVIAL: return "trivial";
              case ELEMENT: return "fixed-element-type";
              case VIRTUAL: return "virtual-baseclass";
              default:
                throw err::Logic{"unknown DestructionMethod"};
            }
        }
      
      DestructionMethod destructor{UNKNOWN};
      bool lock_move{false};
      
      
      /**
       * Select a suitable method for invoking the element destructors
       * and build a λ-object to be stored as deleter function alongside
       * the data; this includes a _copy_ of the embedded allocator,
       * which in many cases is a monostate empty base class.
       * @note this collection is _primed_ by the first element added,
       *       locking it into one of the possible destructor schemes;
       *       the reason is that we do not retain information about the
       *       individual element types and thus must employ one coherent
       *       scheme for all of them.
       */
      template<typename TY>
      Deleter
      selectDestructor()
        {
          typename Policy::Fac& factory(*this);
          
          if (is_Subclass<TY,I>() and has_virtual_destructor_v<I>)
            {
              __ensureMark<TY> (VIRTUAL);
              return [factory](ArrayBucket<I>* bucket){ unConst(factory).template destroy<I> (bucket); };
            }
          if (is_trivially_destructible_v<TY>)
            {
              __ensureMark<TY> (TRIVIAL);
              return [factory](ArrayBucket<I>* bucket){ unConst(factory).template destroy<TY> (bucket); };
            }
          if (is_same_v<TY,E> and is_Subclass<E,I>())
            {
              __ensureMark<TY> (ELEMENT);
              return [factory](ArrayBucket<I>* bucket){ unConst(factory).template destroy<E> (bucket); };
            }
          throw err::Invalid{_Fmt{"Unsupported kind of destructor for element type %s."}
                                 % util::typeStr<TY>()};
        }
      
      template<typename TY>
      void
      __ensureMark (DestructionMethod requiredKind)
        {
          if (destructor != UNKNOWN and destructor != requiredKind)
            throw err::Invalid{_Fmt{"Unable to handle (%s-)destructor for element type %s, "
                                    "since this container has been primed to use %s-destructors."}
                                   % render(requiredKind)
                                   % util::typeStr<TY>()
                                   % render(destructor)};
          destructor = requiredKind;
        }
      
      
      /** mark that we're about to accept an otherwise unknown type,
       * which cannot be trivially moved. This irrevocably disables
       * relocation by low-level `memmove` for this container instance */
      template<typename TY>
      void
      probeMoveCapability()
        {
          if (not (is_same_v<TY,E> or is_trivially_copyable_v<TY>))
            lock_move = true;
        }
      
      bool
      canWildMove()
        {
          return is_trivially_copyable_v<E> and not lock_move;
        }
      
      bool
      canDynGrow()
        {
          return not lock_move;
        }
    };
  
  
  
  
  /* ===== Helpers and convenience-functions for creating SeveralBuilder ===== */
  
  namespace allo { // Setup for custom allocator policies
    
    /**
     * Extension point: how to configure the SeveralBuilder
     * to use an allocator \a ALO, initialised by \a ARGS
     * @note must define a nested type `Policy`,
     *       usable as policy mix-in for SeveralBuilder
     * @remark the meaning of the template parameters is defined
     *       by the partial specialisations; notably it is possible
     *       to give `ALO ≔ std::void_t` and to infer the intended
     *       allocator type from the initialisation \a ARGS altogether.
     * @see allocation-cluster.hpp
     */
    template<template<typename> class ALO, typename...ARGS>
    struct SetupSeveral;
    
    /** Specialisation: use a _monostate_ allocator type \a ALO */
    template<template<typename> class ALO>
    struct SetupSeveral<ALO>
      {
        template<class I, class E>
        using Policy = AllocationPolicy<I,E,ALO>;
      };
    
    /** Specialisation: store a C++ standard allocator instance,
     *  which can be used to allocate objects of type \a X */
    template<template<typename> class ALO, typename X>
    struct SetupSeveral<ALO, ALO<X>>
      {
        template<class I, class E>
        struct Policy
          : AllocationPolicy<I,E,ALO>
          {
            Policy (ALO<X> refAllocator)
              : AllocationPolicy<I,E,ALO>(move(refAllocator))
              { }
          };
      };
    //
  }//(End)Allocator configuration
  
  
  
  /**
   * @remarks this builder notation configures the new lib::Several container
   *       to perform memory management through a standard conformant allocation adapter.
   *       Moreover, optionally the behaviour can be configured through an extension point
   *       lib::allo::SetupSeveral, for which the custom allocator may provide an explicit
   *       template specialisation.
   * @tparam ALO a C++ standard conformant allocator template, which can be instantiated
   *       for creating various data elements. Notably, this will be instantiated as
   *       `ALO<std::byte>` to create and destroy the memory buffer for content data
   * @param args optional dependency wiring arguments, to be passed to the allocator
   * @return a new empty SeveralBuilder, configured to use the custom allocator.
   * @see lib::AllocationCluster (which provides a custom adaptation)
   * @see SeveralBuilder_test::check_CustomAllocator()
   */
  template<class I, class E, template<class,class> class POL>
  template<template<typename> class ALO, typename...ARGS>
  inline auto
  SeveralBuilder<I,E,POL>::withAllocator (ARGS&& ...args)
  {
    if (not empty())
      throw err::Logic{"lib::Several builder withAllocator() must be invoked "
                       "prior to adding any elements to the container"};
    
    using Setup = allo::SetupSeveral<ALO,ARGS...>;
    using BuilderWithAllo = SeveralBuilder<I,E, Setup::template Policy>;
    
    return BuilderWithAllo(forward<ARGS> (args)...);
  }
  
  
  
  
  /*********************************************************//**
   * Entrance Point: start building a lib::Several instance.
   * @tparam I Interface type to use for element access
   * @tparam E (optional) standard element implementation type
   * @return a builder instance with methods to create or copy
   *         data elements to populate the container...
   */
  template<typename I, typename E =I>
  SeveralBuilder<I,E>
  makeSeveral()
  {
    return SeveralBuilder<I,E>{};
  }
  
  template<typename X>
  SeveralBuilder<X>
  makeSeveral (std::initializer_list<X> ili)
  {
    return SeveralBuilder<X>{}
              .reserve (ili.size())
              .appendAll (ili);
  }
  
  
} // namespace lib
#endif /*LIB_SEVERAL_BUILDER_H*/