New MultipartParser fails on files above 102400 bytes #1470
Comments
Looking at the code in a bit more detail, I believe (but haven't tested this) that the error would not occur if the upload consisted of a single file exceeding MEMFILE_MAX, since that file would be spooled to a NamedTemporaryFile and thus count against the much larger disk_limit. It would occur only if you have multiple smaller files, each individually below memfile_limit, that taken together exceed mem_limit.
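To make that scenario concrete, here is a minimal repro sketch. The route, port, and file sizes are hypothetical; it assumes bottle 0.13.x on the server and the requests library on the client:

```python
# --- server.py (bottle 0.13.x) ---
from bottle import post, request, run

@post('/upload')
def upload():
    # Accessing request.files forces multipart parsing, which is where
    # MultipartError("Memory limit reached.") should surface on 0.13.x.
    return {'received': [f.filename for f in request.files.values()]}

run(host='localhost', port=8080)

# --- client.py ---
import requests

# Three 60 KB files: each stays below MEMFILE_MAX (102400 bytes), but
# their sum exceeds it, which should reproduce the failure described above.
files = {f'file{i}': (f'f{i}.bin', b'x' * 60_000) for i in range(3)}
resp = requests.post('http://localhost:8080/upload', files=files)
print(resp.status_code, resp.text)
```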
With the new parser, the limit is now enforced by the parser itself. The idea is that the sum of all in-memory buffers must not exceed it. This is not ideal yet, because the limit is used for two different things: as a soft limit on individual memory-buffered files before they are spooled to disk, and as the total hard limit on memory consumption for all fields and files. Multiple small files that each stay below the limit can, in total, trigger the hard limit and cause an error. We should probably introduce more fine-grained control here, expose those settings to app developers, and improve the documentation. Limiting request parsers is important, but it should be possible to change the limits easily if needed.
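The dual role described here can be modelled with a short sketch. This is an illustrative reconstruction, not bottle's actual parser code; Part, Parser, and feed are invented names for the example:

```python
# Illustrative model (NOT bottle's code) of one limit playing two roles:
# the per-part spool threshold AND the hard cap on total in-memory buffering.
import tempfile

class MultipartError(Exception):
    pass

class Part:
    def __init__(self):
        self.buf = bytearray()   # in-memory buffer
        self.file = None         # disk spool, once created

    @property
    def in_memory(self):
        return self.file is None

    def spool_to_disk(self):
        self.file = tempfile.NamedTemporaryFile()
        self.file.write(self.buf)
        self.buf = bytearray()

class Parser:
    def __init__(self, mem_limit=102400, memfile_limit=102400):
        self.mem_limit = mem_limit          # hard cap across ALL buffered parts
        self.memfile_limit = memfile_limit  # soft, per-part spool threshold
        self.mem_used = 0

    def feed(self, part, chunk):
        if part.in_memory and len(part.buf) + len(chunk) > self.memfile_limit:
            self.mem_used -= len(part.buf)  # these bytes now count against disk_limit
            part.spool_to_disk()
        if part.in_memory:
            if self.mem_used + len(chunk) > self.mem_limit:
                raise MultipartError('Memory limit reached.')
            self.mem_used += len(chunk)
            part.buf += chunk
        else:
            part.file.write(chunk)

# A single 150 KB part spools to disk and never touches the hard limit ...
big = Parser()
big.feed(Part(), b'x' * 150_000)

# ... but three 60 KB parts, none of which spools, trip it on the second feed.
small = Parser()
try:
    for _ in range(3):
        small.feed(Part(), b'x' * 60_000)
except MultipartError as e:
    print('second part already fails:', e)
```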
Fully agree with everything above. As a stopgap until a longer-term fix is implemented, it would probably help to stick with the default value of 1 MB for mem_limit instead of passing in MEMFILE_MAX. Long term, making mem_limit (and probably memfile_limit) configurable and documenting the behavior for file uploads would definitely be desirable, but thinking about it I realized even that would only push the problem to a (now developer-selectable) point in the future. The most elegant solution I can come up with would be to not treat exceeding mem_limit as an instant abort of the parse, but to instead force all subsequent files in the parse to be stored on disk rather than memory-buffered, as sketched below. I think that's a practical fallback that prevents memory exhaustion while still allowing uploads with numerous smaller files.
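A sketch of that fallback, reusing the hypothetical Part/Parser model from the previous example: when the memory budget would be exceeded, the current part is forced onto disk instead of the parse aborting:

```python
def feed_with_fallback(parser, part, chunk):
    # Spool to disk if EITHER the per-part soft limit or the total memory
    # budget would be exceeded; never raise MultipartError for size alone
    # (the separate disk_limit still bounds total disk usage).
    over_soft = len(part.buf) + len(chunk) > parser.memfile_limit
    over_hard = parser.mem_used + len(chunk) > parser.mem_limit
    if part.in_memory and (over_soft or over_hard):
        parser.mem_used -= len(part.buf)  # reassign these bytes to the disk budget
        part.spool_to_disk()
    if part.in_memory:
        parser.mem_used += len(chunk)
        part.buf += chunk
    else:
        part.file.write(chunk)
```

With this variant, the three 60 KB parts from the earlier demo would all be accepted: the first stays in memory, and the later ones transparently land on disk.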
Yes, that's an idea, but there is another problem: Bottle treats text fields that exceed […]. Hmm, maybe just scrap the total hard memory limit and rely on […]?
Original issue description:
Affected versions: 0.13+ (tested in 0.13.2; the issue is still present in the code for 0.14)
Works in 0.12.25
The new MultipartParser released in version 0.13 raises a MultipartError("Memory limit reached.") when the total size of the uploaded files exceeds 102400 bytes. This is due to MEMFILE_MAX being passed in (possibly mistakenly?) as the memory limit on line 1353.
The default value for this parameter would be 2 ** 20 bytes, i.e. ~1 MB, which is still fairly low but more reasonable than 100 KB. Ideally, though, this should either depend on the memory actually available or be configurable, perhaps via a kwarg to run(). A workaround is shown below.
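Until such a kwarg exists, one workaround is to raise bottle's BaseRequest.MEMFILE_MAX class attribute at startup. BaseRequest.MEMFILE_MAX is a real bottle attribute; its effect on multipart parsing assumes that value is still what reaches the parser as its memory limit, as the line referenced above suggests:

```python
# Raise the memory ceiling before serving requests (~1 MB instead of 100 KB).
import bottle

bottle.BaseRequest.MEMFILE_MAX = 2 ** 20

@bottle.post('/upload')
def upload():
    return {'received': [f.filename for f in bottle.request.files.values()]}

bottle.run(host='localhost', port=8080)
```

This only moves the threshold, though: uploads whose combined in-memory size exceeds the new value will still fail, as discussed in the comments above.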