My application runs a proprietary protocol over standard HTTP on a TCP socket (bound to port 8080), and it works perfectly. The normal round trip (measured by the application as well as by a sniffer on the server) on the local (UMTS) network is around 400-600 ms, except for the initial request/response, where the round trip can be up to three times as long (around 1800 ms). Can anyone shed some light on this phenomenon?
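For context, here is a minimal sketch of how the application-side round-trip timing could be done over a plain socket. The host, path, and request body below are placeholders (not the real protocol), and it assumes each response arrives in a single read:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RttProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder address and path; substitute the real server and request.
        try (Socket socket = new Socket("192.168.0.10", 8080)) {
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();
            byte[] request = "GET /ping HTTP/1.1\r\nHost: 192.168.0.10\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII);
            byte[] buffer = new byte[4096];

            for (int i = 0; i < 5; i++) {
                long start = System.nanoTime();
                out.write(request);
                out.flush();
                int read = in.read(buffer); // blocks until the first response bytes arrive
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("round trip " + i + ": " + elapsedMs + " ms (" + read + " bytes)");

                // Drain anything left so the next iteration starts from an empty buffer
                // (assumes the whole response has arrived; a real client would parse it).
                while (in.available() > 0) {
                    in.read(buffer);
                }
            }
        }
    }
}
```

With this kind of loop, only the first iteration shows the ~1800 ms figure; the remaining iterations stay in the 400-600 ms range.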
I have also made sure that:
1. The latency is not due to the TCP three-way handshake (SYN, SYN/ACK, ACK), which is performed when the connection is established, before the first request is issued (see the timing sketch after this list).
2. The latency is not due to DNS resolution, because the server uses a static IP address.
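This is roughly how I separated the connect time from the first request/response time; again the address and request are placeholders, not the actual protocol:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HandshakeVsFirstRequest {
    public static void main(String[] args) throws Exception {
        // Placeholder address; substitute the server's static IP.
        InetSocketAddress server = new InetSocketAddress("192.168.0.10", 8080);

        long t0 = System.nanoTime();
        Socket socket = new Socket();
        socket.connect(server, 10_000); // SYN / SYN-ACK / ACK happens here
        long connectMs = (System.nanoTime() - t0) / 1_000_000;

        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();
        byte[] request = "GET /ping HTTP/1.1\r\nHost: 192.168.0.10\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII);

        long t1 = System.nanoTime();
        out.write(request);
        out.flush();
        in.read(new byte[4096]); // first bytes of the first response
        long firstRequestMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("TCP connect:   " + connectMs + " ms");
        System.out.println("first request: " + firstRequestMs + " ms");
        socket.close();
    }
}
```

The extra delay shows up in the second figure (the first request/response after the connection is already established), not in the connect time.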
I should emphasize that the application has real-time requirements, so the latency of the first request/response is crucial. Am I doing something wrong, or is this normal network behavior? Is it possible to reduce this latency, and if so, how?