socket transfer rate

Is there a way to increase the transfer rate of a TCP SOCK_STREAM socket? I'm trying to send a file through sockets in chunks of 4KB, and the transfer is OK, only that very small. What is the best way to send files? By the way, the network is a LAN.

Well, the best way to transfer data is to do it the proper way ;-) You give very little information about exactly what you are seeing, what you are expecting, and what your code is doing.

In most cases people have problems with TCP/IP because they miss the fact that it's a STREAMing protocol and that there is no such thing as a packet size concept. For example, if you do a send of 16K, there is nothing that says the receive will also be a single read of 16K. It could be 4 reads of 4K, 8 reads of 2K, or a single read of 16K. Code that doesn't deal with this usually breaks.
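In rough C, a receive loop that copes with this looks something like the following (just a sketch; recv_all is a made-up helper name, not a standard call):

```c
/* Keep calling recv() until the expected number of bytes has
 * arrived (or the peer closes the connection), because a single
 * recv() on a TCP stream may return less than was sent. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t recv_all(int sock, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(sock, buf + got, want - got, 0);
        if (n == 0)          /* peer closed the connection */
            break;
        if (n < 0)           /* real error; caller checks errno */
            return -1;
        got += (size_t)n;    /* partial read: loop again */
    }
    return (ssize_t)got;
}
```

The same pattern applies whatever chunk size the sender uses; the loop simply doesn't care how the bytes were packetized.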


If you're not having problems with the actual packets being varying sizes (something Mario alluded to) but instead are having problems with the speed of the connection, I'd suggest you change the size of the socket buffer:

Check out SO_SNDBUF and SO_RCVBUF in the setsockopt() call. By default the buffer is only 16K, so on speedy connections this is usually what limits how fast you can send/receive without losing packets to buffer overflow and having to resend them. I've raised this to 192K (I think that's the max size, as 256K failed) to improve transfer rates.
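In case it helps, the calls look roughly like this (a sketch; set_socket_buffers is a made-up helper, and the kernel may round or cap whatever value you request):

```c
/* Enlarge both socket buffers before transferring. The value
 * here is just an example -- tune it for your system, and use
 * getsockopt() afterwards to see what you actually got. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

int set_socket_buffers(int sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
                   &bytes, sizeof bytes) < 0)
        return -1;
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &bytes, sizeof bytes) < 0)
        return -1;
    return 0;
}
```

Call it on both ends (sender and receiver) before starting the transfer; enlarging only one side helps less.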


Thanks Tim and Mario. The problem is speed, like Tim said. The code sends the file fine, without any error. The only thing is that it is very slow. I'm going to try setting the socket options.

Wow, I think I was sleepy or something when I wrote the original post, hahaha. In the original post I meant
“and the transfer is ok, only that very SLOW” not small. Sorry, hehe.

Even SLOW can be very vague. Slow compared to what? Disk I/O? Can you compare it to a QNET file transfer? Bytes per second maybe? Do you have a busy network? Could your 100TX Ethernet be connected through a 10T hub?

What if you try FTP? I get 10.1 MByte/sec on a 100Mbit network (that is, when writing the file to /dev/null).

And how do you define SLOW? How do you measure transfer rate?

Thanks for the help guys. I tried what Tim said and it improved the speed a lot. When I said slow, I meant something like sending an 8MB file taking about a minute on a 100Mbit LAN, where the two computers were connected to the same 100Mbit hub.


Glad it helped.

8 meg in 1 minute (about 1 Mbit/sec) is brutally slow.

For example, I was recently testing our 1 Gig connection here on a private setup (2 computers connected via a hub) and transferred a 250 Meg file from a RAM drive to take the disk access out of the equation. I was able to transfer the file in about 12 seconds, giving a sustained rate of about 120 Mbits.


I still think there must be something wrong with your code, because the default socket settings are just fine for near wire speed…
(Maybe on a gigabit link the default buffers are somewhat smaller than optimal, but I'm pretty sure about 100Mbit; I think you are hiding some other problem by expanding the buffers.)

Dibakle, is it possible for you to post your code which you use for sending and receiving the file?

Tim, I'm confused. You are saying you transferred a 250 Meg file, but by my calculations you should be able to transfer it in under 2.5 seconds. And instead of 120 Mbits I would expect something under 900 Mbits…

When you are dealing with very high frequency data transfer, such as Gigabit Ethernet, there is a counter-intuitive effect you should be aware of. Everything going through the wire travels at the same signal propagation speed. Higher bandwidth just crowds everything closer together. So small packets take just as long to traverse the cable as with 100Megabit or 10Megabit. So unless you are using very large ethernet packets, and hopefully not too many small protocol packets (things like ACK and NAK), you will get far less point-to-point throughput than the bandwidth might suggest. Switches, especially the store-and-forward type, can also slow things down.
I'm not sure about this, but I'll bet you can only have one packet on the line at a time with 8-wire ethernet, although it is probably full duplex.

What then is the point of more bandwidth? Well, even if you can't use all the bandwidth point to point, it's still there for other nodes to use.

While it sounds a little low, I wouldn’t be surprised if 120Megabit is quite reasonable for ftp over TCP/IP.

My point was that he is changing SO_SNDBUF and SO_RCVBUF on a 100Mbit connection, which is pointless in my view.
And yes, I agree that 120Mbit is reasonable for ftp, since there you are really measuring the speed of your filesystem and hard disk.

Mezek, Maschoen,

In my setup I was trying to test two things:

  1. Speed of the ethernet connection in terms of max performance I might get
  2. Things which may/may not affect my application

To test part 1, I used the ftp client that comes with QNX. Originally I was ftping the 250 meg file from disk to the other machine (which was a dual Xeon server running XP). The initial transfers were taking on the order of 80 seconds, which was about 14 Mbit speed, which I thought was horrendously slow. I guessed the disk was the limiting factor (a SATA drive running in legacy IDE mode in the BIOS, which may have explained the slow disk access). So I created a RAM drive and copied the 250 meg file there. When I then performed the ftp test I got the transfer time down to 12 seconds and hence got the 120 Mbit speed. It's entirely possible that at that point the dual Xeon XP server disk was the limiting factor in the transfer, since I did not have a Windows RAM drive to copy to. In any case, the test told me I could get a greater than 100 Mbit transfer rate.

In part 2, when I was using my app with socket connections, I found that setting SO_SNDBUF and SO_RCVBUF to 196K helped my app immensely in the data rate I could sustain in a worst-case scenario. Note that my app sends lots of small packets back and forth (I am not transferring a file as the original poster of this thread was) and has to process them after receiving them. I was testing extreme cases for my app where I send about 10x the amount of normal traffic I expect to ever get (i.e. I send each message 10x in a for-loop, which floods the driver/card/ethernet). So my guess is the large buffer helped with that flood. That's why I suggested increasing the buffer to the original poster.

My guess is his app is either flooding the send-side buffer in a for-loop, or he may be doing some post-processing on the packets on the receive side, so that by the time he gets back to pulling the next one the buffer had been overwritten and he had to request a re-send.
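For what it's worth, the send side should also loop on short writes. On a blocking socket, send() waits when the buffer is full rather than dropping data, but it can still accept fewer bytes than you asked it to. A sketch (send_all is a made-up helper name):

```c
/* Loop until every byte has been handed to the kernel.
 * send() on a blocking TCP socket may take only part of the
 * buffer, so one call is not enough for large chunks. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

ssize_t send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;       /* caller inspects errno */
        sent += (size_t)n;   /* partial write: loop again */
    }
    return (ssize_t)sent;
}
```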


QNX can achieve very close to 120MB/sec on GigE (i.e. full theoretical wire speed). I have done it myself; but of course, it doesn't take a “rainman” to realize that transferring a 250MB file at 120MB/sec would take just over 2 seconds, not 12.

If you are really getting only 120Mbit, then you are only getting theoretical wire speed for 100Mbit (12.5MB/sec). You should be able to do much better than this (do you have a GigE switch? Is it a good one?).

rgallen, what would you guess an achievable transfer rate, RAM disk to RAM disk, should be using ftp? Just curious.

Um, isn't 120Mbit > 100Mbit, hence I am getting better than 100Mbit ethernet? After all, 12.5 megs a second would take 20 seconds to transfer a 250 Meg file.

The switch is definitely a GigE switch and the GigE lights came on both connections indicating both cards were operating in GigE mode.

As I said, the limiting factor at that point was very likely the dual Xeon server hard drive speed, because even SATA II can't write 120 Megs of data a second (maybe theoretically, but I don't know any 7200 RPM drives that can do that).

My test was only designed to show that achieving a rate greater than 100 Mbit was possible which it did.


Nope. 10Mbit ethernet has a real wire rate of ~12Mbit and 100Mbit has a real wire rate of ~120Mbit. This is one of those rare cases where you actually get more than advertised :-)

Well, it might not be able to write 120MB/sec, but it can certainly do more than 12.5MB/sec (a lot more, at least 50MB/sec), so I don't think it was the bottleneck.

Depends on whether you meant more than what 100Mbit ethernet can actually deliver (which you didn't show) or more than 100Mbit on the wire (which you did show). My assumption was that you meant to show you could achieve a transfer rate higher than what is possible with 100Mbit ethernet (and the fact that it capped right on the mark at 100Mbit EN wire speed is a little suspicious).

While a bit may be a bit, a byte is not a byte. It takes more than 8 bits to carry a byte on Ethernet. I don't recall the exact value, but I think within the frame you have 10 bits per byte, and there is additional overhead including framing, header, and line protocol stuff. So I think his point about getting more than 100Mbit of Ethernet data throughput is correct.

Another consideration is that the filesystem will absorb writes much faster than the disk can write them, up to the size of the cache. This can be removed from the experiment by writing a file many times larger than the cache.


You're right that what I wanted to achieve was a transfer rate > 100 Mbit. We are just trying to decide whether GigE cards are worth putting in, given we won't have a lot of machines competing for the bandwidth (4 tops on our network). So the idea was to see if one or two machines could generate more than 100 Mbit of traffic and hence justify putting in GigE.

Now that you say I capped right at 100 Mbit, it worries me that the test didn't show what I intended, so I may need to go back and re-do it.

What's strange is that during the test runs the network use on both the XP and QNX machines hovered right around 10% or so, which on a GigE connection would indicate 100 Mbit of bandwidth being used. Plus, as I said, the switch lights indicating a GigE connection were lit (I verified this by manually starting the driver under QNX in 100 Meg mode, and the switch acknowledged it by turning the GigE light off).

So I am confused whether I did something wrong or whether ftp just hit some kind of wall at a 100 Mbit transfer rate.


Since I was transferring a 250 Meg file, I am pretty sure I exceeded any caching that a hard drive could possibly have.