Mathew Asselin <email@example.com> wrote:
> Do you know if there are any limitations with respect to the size of a
> file when using fwrite()? Does write() have less limitations?
No, there aren't any such restrictions. Well, not quite true: there
is a restriction on file size at the 2G mark. And these functions take
size_t parameters, which are 32-bit here, so a single call tops out in
the same neighbourhood (size_t itself is unsigned, but anywhere a signed
32-bit offset is involved you hit the 2G limit). There is certainly
nothing special at the 4K-8K range.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TEST_SIZE 8192

int main( int argc, char **argv )
{
    char *ptr, *ptr1;
    FILE *outfile;
    int i;

    ptr = malloc( TEST_SIZE );
    if( !ptr )
        return EXIT_FAILURE;
    memset( ptr, 'a', TEST_SIZE );
    outfile = fopen( "/tmp/blah", "w" );
    if( !outfile )
        return EXIT_FAILURE;
    ptr1 = ptr;
    for( i = 0; i < TEST_SIZE; i++ )
        fwrite( ptr1++, 1, 1, outfile );
    fclose( outfile );
    return EXIT_SUCCESS;
}
This uses the exact same structure around fwrite(): one byte at a time,
writing out 8K of single bytes, and it works happily, generating
an 8K output file.
So, I repeat, the error is in the pointers, or in your manipulation of them.
In fact, are you sure the crash is even in this block of code?
Have you tried getting a core dump and loading it into a
debugger to see where it actually died?
Please follow up to the newsgroup, rather than by personal email.
QNX Training Services