Bump maxTocSize to 150 MB
We have seen an image with:
- total size 1.43 GB
- uncompressed zstd:chunked manifest size of 91.7 MB
- uncompressed tar-split size (not constrained by maxTocSize) of 310 MB

Without more infrastructure, we are just guessing about what the system we are running on can support, so, for now, *shrug*, bump the number.

Eventually we should stream the data from/to disk, making this much less relevant; that makes building the infrastructure to estimate available memory unattractive.

Signed-off-by: Miloslav Trmač <mitr@redhat.com>
This commit is contained in:
parent 3d0b647c63
commit 929f785f43
@@ -23,7 +23,7 @@ import (
 const (
 	// maxTocSize is the maximum size of a blob that we will attempt to process.
 	// It is used to prevent DoS attacks from layers that embed a very large TOC file.
-	maxTocSize = (1 << 20) * 50
+	maxTocSize = (1 << 20) * 150
 )
 
 var typesToTar = map[string]byte{
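For context, maxTocSize bounds how much TOC data is read into memory before parsing, which is why an unusually large zstd:chunked manifest can hit the limit. The following is a minimal, hypothetical Go sketch of how such a cap can be enforced on an incoming blob; the readCapped helper and the inline TOC string are illustrative only and are not the containers/storage implementation.

package main

import (
	"fmt"
	"io"
	"strings"
)

// maxTocSize mirrors the constant bumped in this commit: 150 MiB.
const maxTocSize = (1 << 20) * 150

// readCapped reads at most limit bytes from r and fails if the stream is
// larger, so an attacker-controlled blob cannot exhaust memory.
func readCapped(r io.Reader, limit int64) ([]byte, error) {
	// Ask for one extra byte so "exactly limit" and "too big" are distinguishable.
	data, err := io.ReadAll(io.LimitReader(r, limit+1))
	if err != nil {
		return nil, err
	}
	if int64(len(data)) > limit {
		return nil, fmt.Errorf("blob exceeds maximum allowed size of %d bytes", limit)
	}
	return data, nil
}

func main() {
	toc, err := readCapped(strings.NewReader(`{"version":1}`), maxTocSize)
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Printf("read %d bytes of TOC data\n", len(toc))
}

As the commit message notes, a fixed cap like this is only a guess about available memory; streaming the data to/from disk would remove the need for the limit entirely.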