I would like to copy only a certain portion of one big buffer to a smaller buffer. Is there any performance difference when I use memcpy vs. just doing a straight assignment copy? i.e.
From experience, memcpy can be faster because it contains some optimisations that aren't in the for loop. Unless you call this in a loop a lot of times, I wouldn't worry about the difference, though. memcpy has the advantage of not being affected by compiler flags since it's a function, but then again, depending on compiler options, the memcpy routine may be generated inline.
I’m curious as to what “memcpy is threadsafe” means. If it means that
two threads doing memcpy to the same area are serialised, then I’m
a bit worried. That would suggest that there are two OS calls bracketing the
move, which would definitely slow things down.
If that’s not the case, I’m wondering how the user code would not be thread safe.
Moving a byte to memory is an atomic operation, and threads don’t interfere with
each other’s register assignments.
In this case I don’t think the threadsafe status defines what happens if the destination and source are the same. The issue is the same as having multiple processes run memcpy to and from the same shared memory: all hell will break loose. It’s up to the application to deal with its data synchronisation.
This is as I suspected. From these comments, memcpy() is only
threadsafe in the sense that if calling it in a single-threaded environment won’t sigsegv or return an error, calling it in a multi-threaded environment won’t either. This seems pretty mundane, given that the results (without proper synchronisation) could be completely unpredictable.
However, all C standard functions (ANSI/Unix/POSIX/whatever), and probably most system calls, behave this way: it is the responsibility of the user to call the function with proper arguments (including proper synchronisation in a multi-threaded or SMP environment).