Hi @ismcagdas, I also hope that Microsoft will fix this quickly, but the issue has been open since 2016... I'm not confident we'll see a fix for several months! We are still getting long waits (sometimes more than 20s) because of this. It would be welcome to have a workaround in the next version, 11.1, as you have planned it on the milestone https://github.com/aspnetzero/aspnet-zero-core/milestone/93
In the end, we decided to call ThreadPool.SetMinThreads on our staging slot. The first results were very good! It is now in production and request performance roughly doubled. We will keep monitoring it over the next few days.
Our finding is that a sync method on a hot path (like ValidateToken) can behave very badly on Azure, since allocating new thread pool threads also takes time (about 500ms per thread). So, as incoming requests increased, we had lots of requests queued waiting for available threads.
This misled us for a while because we kept adding more Azure resources; but this problem does not depend on server CPU or RAM, it depends on thread pool behavior. Therefore, the only workaround is to set a minimum thread count and carefully tune the value (we started with 100 minimum threads).
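For anyone else hitting this, here is roughly how we apply it. This is only a sketch: the `ThreadPool:MinThreads` configuration key is something we invented to make the value tunable, and 100 is just our current starting point, so adjust both for your workload.

```csharp
// Program.cs - minimal sketch of the workaround; the "ThreadPool:MinThreads"
// key is our own convention, not something provided by ABP or Azure.
using System;
using System.Threading;
using Microsoft.Extensions.Configuration;

public static class Program
{
    public static void Main(string[] args)
    {
        var configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();

        // Read the tuned value (we currently start at 100).
        var minThreads = configuration.GetValue<int?>("ThreadPool:MinThreads");
        if (minThreads.HasValue)
        {
            ThreadPool.GetMinThreads(out var workerMin, out var ioMin);

            // Only raise the minimums, never lower them below the runtime defaults.
            ThreadPool.SetMinThreads(
                Math.Max(workerMin, minThreads.Value),
                Math.Max(ioMin, minThreads.Value));
        }

        // ... then build and run the web host as usual.
    }
}
```

The important part is that the call runs once, at startup, before the app starts taking traffic.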
Hope this can help other ABP users.
In the meantime, we are waiting for an async implementation of JWT token validation by the ABP team to get rid of these side effects for good :)
Thank you for your feedback @ismcagdas. I saw that you plan to work on it for v11.1, which is great. In the meantime, do you have an idea or workaround to improve this?
While trying to mitigate this issue we found some similar reports (requests being queued) caused by the Azure App Service thread pool mechanism. It seems that adding a thread to the pool can take around 500ms. So some people are using this code:
ThreadPool.SetMinThreads(50, 50); // worker and IO threads
This defines a minimum thread count for the instance... it seems to improve performance a lot, but we also found a lot of warnings about it. We are going to test it in our staging slot anyway, but we would like your advice on it.
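While it runs in staging we also plan to log a few thread pool counters, to see whether requests really queue because the pool grows too slowly. This is just a quick sketch of our own (the `ThreadCount` and `PendingWorkItemCount` properties need .NET Core 3.0+), not an ABP feature:

```csharp
// Minimal diagnostic sketch: periodically log thread pool numbers so we can
// see whether requests are queuing because the pool is growing too slowly.
using System;
using System.Threading;
using System.Threading.Tasks;

public static class ThreadPoolMonitor
{
    public static Task StartAsync(CancellationToken token) =>
        Task.Run(async () =>
        {
            while (!token.IsCancellationRequested)
            {
                ThreadPool.GetMinThreads(out var minWorker, out var minIo);
                ThreadPool.GetAvailableThreads(out var freeWorker, out var freeIo);

                Console.WriteLine(
                    $"threads={ThreadPool.ThreadCount} " +
                    $"queued={ThreadPool.PendingWorkItemCount} " +
                    $"min=({minWorker},{minIo}) free=({freeWorker},{freeIo})");

                await Task.Delay(TimeSpan.FromSeconds(10), token);
            }
        }, token);
}
```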
Hi,
I don't know if this is still a problem for you @ashjackson. It worked for me after switching the deployment mode to non-self-contained (framework-dependent). Your app footprint will be reduced and you will use the .NET 5 runtime provided by your Azure App Service.
Hi @ismcagdas,
Thank you for your suggestion. I will create an issue on GitHub.
The problem is happening on a regular basis: many requests are delayed by 10s or more! When that happens, everything seems to be locked. Something else must be wrong in the project. I can't imagine that other ABP apps are hitting the same issue without anyone doing anything about it.
We are actively working on this issue and analysing memory dumps right now.
Do you have any clue to guide us? We need to find a workaround in a very short time.
Thanks @ismcagdas, I will do it.
The problem probably comes from AsyncHelper.RunSync. All user requests are blocked while the application is in a waiting state... maybe when cache data is being updated. I saw in the ABP source code that some locks and semaphores are taken there.
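To illustrate what I suspect (the `ICacheClient` interface and method names below are made up for the example; only `AsyncHelper.RunSync` is ABP's): each request that blocks on async work keeps a thread pool thread busy doing nothing while the awaited work needs yet another pool thread, so under load the pool runs dry and everything queues.

```csharp
// Illustration only: ICacheClient and TokenCacheReader are hypothetical,
// but the sync-over-async pattern is the one AsyncHelper.RunSync is built on.
using System.Threading.Tasks;
using Abp.Threading;

public interface ICacheClient
{
    Task<string> GetAsync(string key);
}

public class TokenCacheReader
{
    private readonly ICacheClient _cache;

    public TokenCacheReader(ICacheClient cache)
    {
        _cache = cache;
    }

    // Blocks the current thread pool thread until the async call completes;
    // under heavy load these blocked threads are what starves the pool.
    public string GetSync(string key)
    {
        return AsyncHelper.RunSync(() => _cache.GetAsync(key));
    }

    // The async version releases the thread back to the pool while waiting,
    // which is why we are hoping for an async token validation path.
    public Task<string> GetAsync(string key)
    {
        return _cache.GetAsync(key);
    }
}
```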
I guess I'm not the only customer facing this issue.
We are treating this issue as one of our TOP priorities; I will continue investigating the dump files and provide feedback.
Your help would be much appreciated.
Yes, this table is cleared every day by a background job. If this is what you wanted to check, I confirm that it is working fine on my side. My concern is more about the design: given that a refresh token has an expiration time of 1 year, this table can grow very large over time.
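For reference, our daily cleanup is essentially the kind of periodic worker sketched below. The `RefreshToken` entity and its `ExpirationDate` property are placeholders for our own schema; only the worker plumbing comes from ABP.

```csharp
// Sketch of the daily cleanup job; the RefreshToken entity is a placeholder
// for our own table, only the background worker plumbing is ABP's.
using System;
using Abp.Dependency;
using Abp.Domain.Entities;
using Abp.Domain.Repositories;
using Abp.Domain.Uow;
using Abp.Threading.BackgroundWorkers;
using Abp.Threading.Timers;
using Abp.Timing;

public class RefreshToken : Entity<Guid>
{
    public DateTime ExpirationDate { get; set; }
}

public class ExpiredRefreshTokenCleaner : PeriodicBackgroundWorkerBase, ISingletonDependency
{
    private readonly IRepository<RefreshToken, Guid> _refreshTokenRepository;

    public ExpiredRefreshTokenCleaner(
        AbpTimer timer,
        IRepository<RefreshToken, Guid> refreshTokenRepository)
        : base(timer)
    {
        _refreshTokenRepository = refreshTokenRepository;
        Timer.Period = (int)TimeSpan.FromDays(1).TotalMilliseconds;
    }

    [UnitOfWork]
    protected override void DoWork()
    {
        // Delete every token whose expiration date is already in the past;
        // the worker still has to be registered with IBackgroundWorkerManager
        // in the module's PostInitialize.
        _refreshTokenRepository.Delete(t => t.ExpirationDate < Clock.Now);
    }
}
```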
Hi @ismcagdas,
I'm currently trying it on my dev machine. Everything works great! I notice a huge performance improvement with PerRequestRedisCache enabled compared to disabled. I will deploy it on Azure today and give feedback if something goes wrong. Thanks for your 5* support @ismcagdas :)