nicomb on Mon, 01 Jul 2019 09:04:22
I have a new Azure Redis Cache instance and I've been monitoring it for days now. I'm checking how my Redis instance is performing by using the redis-cli command "INFO".
Everything looks fine except for the total_connections_received stats.
It says that my total_connections_received:4020498 at the moment and it keeps growing rapidly.
Here is what the Redis documentation says:
total_connections_received: Total number of connections accepted by the server
2:00 P.M : total_connections_received:4020498
3:00 P.M : total_connections_received:4027593
After an hour it grew by 7,095.
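The hourly delta above can be computed by diffing two INFO samples; a minimal sketch (the `parse_info` helper just pulls the counter out of INFO-style text; the sample strings below use the numbers observed above, not real server output):

```python
def parse_info(text):
    """Parse redis-cli INFO output ("key:value" lines) into a dict."""
    stats = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            stats[key] = value.strip()
    return stats

# Two samples taken an hour apart (the counters observed above):
sample_2pm = parse_info("# Stats\ntotal_connections_received:4020498")
sample_3pm = parse_info("# Stats\ntotal_connections_received:4027593")

delta = (int(sample_3pm["total_connections_received"])
         - int(sample_2pm["total_connections_received"]))
# 7095 new connections in one hour, i.e. roughly 2 per second
```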
I also checked the application that connects to my Redis instance and it doesn't flood the connection. On average there are 50 to 150 connections per day, so I'm sure it is not my application's fault.
Why is my total_connections_received increasing rapidly?
Mike Ubezzi (Azure) on Tue, 02 Jul 2019 00:31:44
Are you releasing open connections before creating new connections? It doesn't appear to be the case. Which client language are you leveraging with your solution? Please take a look at the following best practices (link).
- Reuse connections - Creating new connections is expensive and increases latency, so reuse connections as much as possible. If you choose to create new connections, make sure to close the old connections before you release them (even in managed memory languages like .NET or Java).
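As an illustrative sketch of the reuse pattern (language-agnostic Python; the counting factory below stands in for whatever actually creates your Redis connection, e.g. a real client constructor):

```python
import threading

class SharedConnection:
    """Create the underlying client once, on first use, and hand the same
    instance to every caller instead of opening a new connection per request."""
    def __init__(self, factory):
        self._factory = factory   # stand-in for the real connection constructor
        self._client = None
        self._lock = threading.Lock()

    def get(self):
        if self._client is None:          # double-checked locking
            with self._lock:
                if self._client is None:
                    self._client = self._factory()
        return self._client

# Demo with a counting factory instead of a real Redis client:
created = []
conn = SharedConnection(lambda: created.append(1) or object())
a, b = conn.get(), conn.get()
# the factory ran exactly once; a and b are the same object
```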
Please let us know if you require additional information or if there appears to be a different issue here.
nicomb on Tue, 02 Jul 2019 02:09:26
Thank you for your response.
What do you mean by "closing before release"? I'm releasing all Redis resources before creating a new connection.
We have a server that PUBLISHes data to Redis and a client that SUBSCRIBEs.
When I'm checking the stats, my connected_clients is zero.
Also, when checking with the redis-cli "MONITOR" command, everything looks normal: no connection flooding. But total_connections_received from the "INFO" command increments rapidly.
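For illustration, a connect/publish/disconnect cycle per message is one pattern that produces exactly this combination: connected_clients stays at zero while total_connections_received keeps climbing, because the latter is a cumulative counter. A self-contained simulation (the FakeServer is a stand-in for Redis, purely to show how the two counters move):

```python
class FakeServer:
    """Stand-in for Redis that only tracks the two INFO counters."""
    def __init__(self):
        self.total_connections_received = 0
        self.connected_clients = 0

    def connect(self):
        self.total_connections_received += 1
        self.connected_clients += 1

    def disconnect(self):
        self.connected_clients -= 1

server = FakeServer()

# Anti-pattern: open a fresh connection for every PUBLISH, then close it.
for _ in range(100):
    server.connect()
    # ... PUBLISH would happen here ...
    server.disconnect()

# total_connections_received is now 100, yet connected_clients is 0 --
# the cumulative counter only ever grows, even with no open connections.
```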
Mike Ubezzi (Azure) on Mon, 15 Jul 2019 23:34:09
Can you please send me your Subscription ID and Redis Cache instance name, and I can have a service engineer take a look at this if you are still having an issue. You can send this to AzCommunity. If you found a solution, please do post it for the benefit of others who are looking for the same.
Mike Ubezzi (Azure) on Wed, 17 Jul 2019 13:32:01
I am adding one last bit of information that is helpful. First, I want to confirm which Redis client you are using. If you are using the StackExchange.Redis client library, the following Best Practices are a guideline (link).
If you are using another client library, can you provide your client code so we can investigate this further?
The following guidance is suggested if you are using the StackExchange.Redis library:
- Set AbortConnect to false, then let the ConnectionMultiplexer reconnect automatically. See here for details.
- Reuse the ConnectionMultiplexer - do not create a new one for each request. The Lazy<ConnectionMultiplexer> pattern shown here is recommended.
- Redis works best with smaller values, so consider chopping up bigger data into multiple keys. In this Redis discussion, 100 kb is considered large. Read this article for an example problem that can be caused by large values.
- Configure your ThreadPool settings to avoid timeouts.
- Use at least the default connectTimeout of 5 seconds. This interval gives StackExchange.Redis sufficient time to re-establish the connection in the event of a network blip.
- Be aware of the performance costs associated with the different operations you are running. For instance, the KEYS command is an O(n) operation and should be avoided. The redis.io site has details on the time complexity of each operation it supports; click each command to see its complexity.
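To illustrate the "smaller values" point above, here is one hypothetical way to split a large value across multiple keys (the ~100 KB threshold comes from the discussion referenced above; the key-naming scheme and helper are assumptions for the sketch, not a client-library API):

```python
CHUNK_SIZE = 100 * 1024  # ~100 KB, the size the referenced discussion calls large

def chunk_value(key, data, chunk_size=CHUNK_SIZE):
    """Split one large value into (key:index, chunk) pairs, each small
    enough to be stored under its own Redis key (e.g. via MSET)."""
    return [(f"{key}:{i}", data[off:off + chunk_size])
            for i, off in enumerate(range(0, len(data), chunk_size))]

pairs = chunk_value("payload", b"x" * 250_000)
# 250,000 bytes split into chunks of 102,400, 102,400 and 45,200 bytes
reassembled = b"".join(chunk for _, chunk in pairs)
```

Reading the value back is then a matter of fetching the chunk keys in index order and concatenating, as `reassembled` shows.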
Additionally, do you have GC (garbage collection) enabled? Enable server GC (link).
Thank you for any additional details, such as a client code example.
Mike Ubezzi (Azure) on Mon, 29 Jul 2019 21:46:42
I want to follow up on this issue to see if you were able to get it resolved. I believe that if server GC is enabled, it should address the issue, but I want to check whether you found a resolution.