Fixing slow performance with Azure SQL Database

Recently I’ve started using Azure for my applications and their databases. One of the first things I encountered was how slow Azure SQL databases seemed to be compared with performance on my local machine and on other hosting services I’ve used. A query I was running on a dev laptop, which isn’t a beast of a machine, would regularly take under a second to complete. The primary table it was querying has about 200,000 records, and the query involved five or six joins.

When the same database was up on Azure, my website kept timing out whenever it hit that query. I ran the query through Management Studio and reliably got 42-second response times. This was on an Azure S0 database instance.

Searching the web for “slow azure db”, I came across this result:

It seems I’m not the only one, and that with the change in pricing last year (which predated my Azure experience), the performance of relatively small databases has declined significantly. Most people who posted on that page said they had to upgrade their instances to get somewhat (but not significantly) better performance. That obviously comes at a cost, but I thought I would try it out.

I upgraded to an S1 instance: the query time was reliably 20 seconds.

I upgraded again to an S2 instance: the query time was reliably 12 seconds.

Clearly the performance was improving, but so was the cost. I couldn’t justify the cost of an S2 database. It isn’t exorbitant, but charging my customers that much would have been unpalatable when other hosting options are so much cheaper. And 12 seconds was still nowhere near good enough – the query needed to complete in under a second, otherwise my website just wouldn’t be fast enough.

As a last resort I tried adding indices for the query. I hadn’t thought of doing this because the query ran so fast on my local machine, and because all the joins were on the foreign tables’ primary keys, I assumed the query was “good enough” and that the difference between 0 seconds and 42 seconds couldn’t possibly be an index issue – surely Azure should give comparable performance to a mid-range laptop or another hosting option?
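For illustration, the kind of index that made the difference was a covering nonclustered index on the join keys. The table and column names below are hypothetical, not my actual schema:

```sql
-- Hypothetical sketch: a nonclustered index on the join key, with the
-- columns the query selects INCLUDEd, so SQL Server can answer the
-- query from the index alone instead of doing a key lookup per row.
CREATE NONCLUSTERED INDEX IX_Order_CustomerId
ON dbo.[Order] (CustomerId)
INCLUDE (OrderDate, TotalAmount);
```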

I was still on the S2 instance when I tested it with the indices in place. Query time was 0 seconds!

I decided to downgrade back to the S0 instance. Query time remained 0 seconds!

Fantastic! If Azure SQL needs the appropriate indices to perform well on the lower tiers, I can handle that.

So here’s my thinking on why it works like that, although I’m no expert. On my laptop, and on the other hosting provider, if a “big” query comes along, all of the machine’s resources (RAM/CPU) are thrown at that process and the query is resolved quickly. That’s fantastic, unless your site is sitting on a shared server where other websites are generating heavy workloads and your simple queries get queued until resources are available again. I read that the changes to Azure’s pricing model were partly precipitated by complaints about the unpredictability of Azure DB performance.

My guess is that this is what was happening on Azure previously. The change means that you are now (with the DTU pricing model) practically guaranteed a level of performance. That’s both an upside and a downside: your database is not going to be able to consume huge amounts of resources to process (relatively) expensive queries. The solution is to make sure your queries are tuned as much as possible; if you have the appropriate indices in place (the lack of which you might never notice during development), you should still be able to get decent performance from your Azure SQL database, at an acceptable price point too.
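If you’re not sure which indices you’re missing, SQL Server keeps its own suggestions in the missing-index DMVs, which are available on Azure SQL Database as well as on-premises. A query along these lines lists them (the exact columns you pull out are just one reasonable choice):

```sql
-- Ask SQL Server which indexes it thinks the workload is missing,
-- ranked by a rough estimate of how much they would help.
SELECT d.statement          AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```

Note these are only suggestions based on queries the server has actually seen; they reset when the instance restarts, so run your workload first.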



  1. “because all the joins were on the foreign tables’ primary keys, I thought the query was ‘good enough’”

    So where did you get the performance gain on your indexes?
    Can you share your process?
    Or was it simply a matter of connecting to the Azure instance from a local machine, running the same queries, and profiling?

    1. I believe it’s mostly just having a covering index. There were 6 tables involved, but with a covering index it’s effectively like having just one table. The performance tuning also threw up some SQL for adding table statistics, which could also have helped.
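The statistics-adding SQL mentioned in the reply above would look something like this (the object names here are hypothetical):

```sql
-- Hypothetical: create column statistics like those a tuning advisor
-- suggests, then refresh them with a full scan of the table so the
-- query optimizer has accurate row estimates for its plan choices.
CREATE STATISTICS ST_Order_OrderDate ON dbo.[Order] (OrderDate);
UPDATE STATISTICS dbo.[Order] ST_Order_OrderDate WITH FULLSCAN;
```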
