Is this a use case rather than an issue? The submitter is looking for community best practices. We could paraphrase this as a use case rather than moving the issue over
Over 100TB of data
Just use a bucket as a storage root? S3 scales a bucket based on usage, so a large amount of data or a usage spike might not perform well until more resources are added; this is only a problem if usage is spiky.
It is possible to create performance hotspots in a bucket, but this is unlikely to be an issue
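One common way to avoid hotspots is to spread object paths across many key prefixes, as the registered hashed n-tuple storage layout extension does. A minimal sketch, assuming the default-style parameters (sha256 digest, three tuples of three characters); this is an illustration, not the normative algorithm text:

```python
import hashlib

def hashed_ntuple_path(object_id: str, tuple_size: int = 3, num_tuples: int = 3) -> str:
    """Map an OCFL object id to a storage path whose leading n-tuples
    spread object roots across many key prefixes."""
    digest = hashlib.sha256(object_id.encode("utf-8")).hexdigest()
    # Take the first num_tuples slices of tuple_size hex characters each.
    tuples = [digest[i * tuple_size:(i + 1) * tuple_size] for i in range(num_tuples)]
    # Using the full digest as the final segment keeps object roots unique.
    return "/".join(tuples + [digest])

print(hashed_ntuple_path("ark:/12345/bcd987"))
```

Because the leading path segments are effectively random hex, writes and reads are distributed across prefixes rather than concentrated under one.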
If there were a fixed name for the inventory file it would be easier to find, particularly in object-store implementations
Can we provide a hint with a storage-root level statement via an extension?
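Such a hint could take the form of a small storage-root extension config. The sketch below is entirely hypothetical (the extension name and both fields are invented for illustration; no such extension is registered), declaring up front the digest algorithm used for inventory sidecars so clients can predict the sidecar filename:

```json
{
  "extensionName": "NNNN-inventory-hint",
  "inventoryDigestAlgorithm": "sha512"
}
```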
To be discussed at the next editors meeting
Extensions
Recent updates
There have been a couple of changes to parameters:
They are now assigned types drawn directly from JSON types.
What used to be range is now a constraints field; the guidance is "whatever makes sense for the extension you're creating"
The config file now always has the same name: config.json
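As an illustration of how these pieces fit together, a config.json for the registered hashed n-tuple layout extension looks roughly like the fragment below (taken as a sketch; check the extension definition for the exact parameter names and their types/constraints):

```json
{
  "extensionName": "0004-hashed-n-tuple-storage-layout",
  "digestAlgorithm": "sha256",
  "tupleSize": 3,
  "numberOfTuples": 3
}
```

Under the updated rules, a parameter such as tupleSize would be described in the extension definition with a JSON type (number) plus a constraints statement (e.g. "an integer in a sensible range"), rather than the old range field.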
"Publishing released versions of extension 'specification'" - issue #37
Release versions of the extensions specification (i.e. the README)
Add a field to extension definition headers that notes the version of the extensions specification to which an extension conforms, similar to the "Minimum OCFL Version" header field.
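Concretely, an extension definition's header block might gain a line like the one sketched below; the field name "Extensions Specification Version" and its value are illustrative only, pending the editors' decision:

```
Extension Name: NNNN-example-extension
Minimum OCFL Version: 1.0
Extensions Specification Version: 1.0
```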
Editors to take this up and deal with the specifics