Issue 454586
Created on 2001-08-23.14:27:14 by anonymous, last changed 2001-10-06.20:12:26 by bckfnn.

Messages

msg402 | Author: Nobody/Anonymous (nobody) | Date: 2001-08-23.14:27:14

Unpickling of large lists, or doing multiple loads, gives the wrong results. Attached is an example. Environment: Java JDK 1.3, Windows 2000, Jython 21a3.

msg403 | Author: Nobody/Anonymous (nobody) | Date: 2001-08-23.14:31:24

Logged In: NO

Forgot the example:
------------------
import pickle

pfile=open("ptest.pi","wb")
p=pickle.Pickler(pfile)
for l in range(1,10000):
    row=[str(l),str(l)]
    p.dump(row)
pfile.close()

print "reading"
n=1
try:
    pfile=open("ptest.pi","rb")
    l=pickle.load(pfile)
    while l:
        comp = [str(n),str(n)]
        if l != comp:
            print "Pickle error"
            print str(l) + " should be " + str(comp)
        n=n+1
        l=pickle.load(pfile)
    pfile.close()
except EOFError:
    print "End reached, well done"
    pfile.close()

msg404 | Author: Finn Bock (bckfnn) | Date: 2001-10-06.20:12:26

Logged In: YES (user_id=4201)

To be correct, it is actually pickle.dump() that writes an incorrect pickle to the file. The problem is caused by bug #222789, where the id() of Jython objects is not unique, while pickle.py expects all id() values to be unique integers. The bug does not exist in cPickle, so the workaround is to use that instead of pickle.py. A real fix might be possible, but the solutions we have come up with so far would increase memory use for all objects. It might never be fixed for real. I'm going to close this report because the problem is already described in #222789.
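
A note on the mechanism described above: pickle.py memoizes every object it has already written in a dictionary keyed by id(obj), and writes a back-reference whenever it sees a key that is already in the memo. The sketch below is a minimal, hypothetical illustration of that bookkeeping (MemoSketch and save() are made-up names, not the real Pickler API); it only shows why a non-unique id(), as in bug #222789, makes dump() emit a reference to the wrong object.

# Hypothetical, simplified sketch of the memo scheme pickle.py relies on.
class MemoSketch:
    def __init__(self):
        self.memo = {}          # id(obj) -> (memo index, obj)

    def save(self, obj):
        key = id(obj)
        if self.memo.has_key(key):
            # pickle.py takes "same id()" to mean "same object already
            # written" and records a back-reference instead of the value.
            return "get %d" % self.memo[key][0]
        self.memo[key] = (len(self.memo), obj)
        return "put %d" % self.memo[key][0]

# If two distinct objects report the same id() (bug #222789), the second
# one is written as a back-reference to the first, so the file contains
# the wrong rows before load() is ever called.

Per msg404, the practical workaround is a one-line change in the reporter's script: import cPickle as pickle (cPickle.Pickler and cPickle.load accept the same arguments used above).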

History

Date | User | Action | Args
---|---|---|---
2001-08-23 14:27:14 | anonymous | create |