Message6857
We can't really increase the mark limit - it causes out-of-memory errors pretty quickly in our regression tests when it is large, which leads me to believe it can be leaky. Routinely rejecting files larger than 100K isn't what we want either, but I'm not sure yet how to fix this. We have two uses of mark/reset: one checks the very beginning of the file for unicode markers - that one we could replace with a PushbackInputStream (http://docs.oracle.com/javase/1.4.2/docs/api/java/io/PushbackInputStream.html).
The harder part is printing lines for parse errors - we might be able to use a PushbackInputStream for that too, but it would take some investigation. If anyone would like to submit a patch for this, I'd be more than willing to review it.
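The first use mentioned above can be sketched roughly as follows. This is a hedged illustration, not Jython's actual code: the class name `BomSniff` and the helper `skipUtf8Bom` are made up for this example, and only the UTF-8 marker is handled. The idea is that `PushbackInputStream` lets us read a few leading bytes and, if they are not a byte-order mark, push them back so the parser still sees the whole stream - no mark limit required.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.PushbackInputStream;

public class BomSniff {
    // UTF-8 byte-order mark
    private static final byte[] UTF8_BOM = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};

    // Returns a stream positioned just past the BOM if one is present;
    // otherwise all original bytes remain readable.
    static InputStream skipUtf8Bom(InputStream in) throws IOException {
        PushbackInputStream pb = new PushbackInputStream(in, UTF8_BOM.length);
        byte[] head = new byte[UTF8_BOM.length];
        int n = pb.read(head, 0, head.length);
        boolean isBom = n == UTF8_BOM.length
                && head[0] == UTF8_BOM[0]
                && head[1] == UTF8_BOM[1]
                && head[2] == UTF8_BOM[2];
        if (!isBom && n > 0) {
            // Not a BOM: push the sniffed bytes back onto the stream.
            pb.unread(head, 0, n);
        }
        return pb;
    }

    public static void main(String[] args) throws IOException {
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i'};
        InputStream s = skipUtf8Bom(new ByteArrayInputStream(withBom));
        System.out.println((char) s.read()); // prints 'h'
    }
}
```

Unlike mark/reset, the pushback buffer here is a fixed three bytes regardless of file size, which is why this approach sidesteps the mark-limit problem entirely.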
Date                | User        | Action | Args
2012-03-19 20:16:16 | fwierzbicki | set    | messageid: <1332188176.1.0.0897048060805.issue1744@psf.upfronthosting.co.za>
2012-03-19 20:16:16 | fwierzbicki | set    | recipients: + fwierzbicki, amak, reljicb, philba, jtrim
2012-03-19 20:16:16 | fwierzbicki | link   | issue1744 messages
2012-03-19 20:16:15 | fwierzbicki | create |