Category: azure analysis services
Amit Rohilla on Tue, 10 Oct 2017 21:02:38
I am planning to create a Web API on top of Azure Analysis Services which will be consumed from a Web Site. Since the Web Site will be used by hundreds to thousands of people at the same time, there will be a corresponding number of queries running on AAS via the Web API.
Are there any limitations or guidelines on how many connections and queries are supported, which tier of AAS works well, when to scale out query processing, etc.?
With Regards, Amit
BillAnton on Tue, 31 Oct 2017 20:21:15
Yes and no. Here are a few points to consider...
- Yes, there are "guidelines" to help determine SSAS instance sizing (Tabular Hardware Sizing Guide), but it was written for on-prem SSAS Tabular and mainly focuses on the amount of memory needed (as opposed to the amount of CPU needed - which is arguably the harder of the two to determine).
- Azure AS instances are sized in QPUs (query processing units), which can be roughly translated to CPU using the following rule of thumb: 100 QPUs is roughly equivalent to "5 pretty fast CPU cores".
- In general, CPU is the resource that dictates the level of concurrency... but the reality is that it depends on the complexity of the model and/or the calculations referenced in each query. For example, a very simple model with basic sum-aggregations and an avg query response time of sub-100ms will support far more "concurrent users" on the same Azure AS instance than a complex model with complex calculations and an avg query response time of > 10 seconds.
- There is no limit on the number of user connections to a SSAS database. However, if there are no resources (i.e. CPU cores/threads) available, then those users, despite being connected to the database, will NOT be able to execute any queries... technically they can submit queries, but those queries will queue up waiting on resources... key takeaway: performance will be bad.
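The points above can be turned into a rough back-of-the-envelope calculation. This is only a sketch: the 100 QPUs ≈ 5 cores figure is the rule of thumb from this thread, and the throughput estimate assumes each core works one query at a time (a Little's-law style simplification), not an official Azure AS formula.

```python
CORES_PER_100_QPU = 5  # "100 QPUs ~ 5 pretty fast CPU cores" (rule of thumb above)

def approx_cores(qpu: int) -> float:
    """Translate a tier's QPU rating into an approximate core count."""
    return qpu / 100 * CORES_PER_100_QPU

def queries_per_second(qpu: int, avg_response_sec: float) -> float:
    """Rough throughput estimate: if each core handles one query at a time,
    throughput ~ cores / average query duration."""
    return approx_cores(qpu) / avg_response_sec

# A 100-QPU instance serving sub-100ms queries vs. 10-second queries:
print(queries_per_second(100, 0.1))   # ~50 queries/sec
print(queries_per_second(100, 10.0))  # ~0.5 queries/sec
```

This makes the concurrency point concrete: the same instance supports two orders of magnitude more query throughput when queries are simple and fast than when they take 10+ seconds.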
In your case, where you are very likely to require dynamic scale-out (which is the correct approach for high-concurrency scenarios), you will definitely need a Standard SKU (e.g. S1, S2, etc.). Next, pick a tier with enough memory to support your data model (see: how much ram do i need for my tabular server).
What you end up with is the cheapest SKU that supports your model and functionality. Let's say this is an S2 instance. An S2 has 200 QPUs (about 10 pretty fast CPU cores)... how many concurrent users can this instance handle? To find this number, you will need to load-test. Say load-testing shows it can support 40 concurrent users. With dynamic scale-out, you can have up to 7 replicas: 40 users/replica * 7 replicas = 280 users. If you expect more than 280 users, you'll need to bump up to a higher tier with more QPUs per replica.
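The sizing walk-through above can be sketched in a few lines. The 7-replica cap and the 40 users/replica figure are taken from the example in this thread (the latter is a hypothetical load-test result, not a published number), so treat these as illustrative only.

```python
MAX_REPLICAS = 7  # Azure AS dynamic scale-out cap cited above

def max_users(users_per_replica: int, replicas: int = MAX_REPLICAS) -> int:
    """Total concurrent users a fully scaled-out tier can handle."""
    return users_per_replica * replicas

def replicas_needed(expected_users: int, users_per_replica: int) -> int:
    """Ceiling division; if this exceeds MAX_REPLICAS, move up a tier."""
    return -(-expected_users // users_per_replica)

print(max_users(40))            # 280 users on a fully scaled-out S2
print(replicas_needed(150, 40)) # 4 replicas
```

The key design point is that load-testing gives you the users-per-replica number; everything after that is simple multiplication, and exceeding the replica cap is the signal to pick a tier with more QPUs per replica.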
Data Enthusiast | @SQLbyoBI | www.opifexsolutions.com