Watching the cricket today, I was wondering what a graph of the different teams’ runs per over would look like over the past decade or so. The English team seems to bat at close to 4 an over lately, while the South African team seems happy at a more ‘traditional’ rate of around 3 per over. Maybe because the English have played the Australians in the Ashes so much, they have started to emulate their run-scoring pace – that was my hypothesis. It was the Aussies, after all, who really ramped things up 10-15 years ago, when they dominated test cricket for a decade.

I wanted to check my theories against the data, so I downloaded the test data from Cricinfo StatsGuru. The year-by-year team scoring rates (RPO) had too much variance, so I adjusted the graph to use a rolling run rate over the previous 5 years. This is what you get:

It’s quite impressive to see how the Australians broke the mold. South Africa is pretty consistent, but England aren’t as much faster than the Proteas as I thought they’d be, at least not over the last 5 years. New Zealand’s Black Caps seem to be cranking it up, though. The Zimbabweans haven’t yet reached the levels most teams were at in 1999 (their hiatus shows in the 5-year average as a gap in 2010).

Using the run rate from the previous 10 years, the graph smooths out a bit:

You can just see the passing of the baton from the West Indies team to the Aussies – it would be interesting to run this even further back than 1990, which was as far back as I went. One day, when I’m big, I’ll have to redo it with all test data, and make it an interactive webpage. For now, the above are images from an Excel spreadsheet…
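If you’d rather script the calculation than spreadsheet it, the rolling rate is simple enough. Here’s a minimal sketch, assuming a hypothetical per-year export from StatsGuru (the interface and `englandYears` below are my invention, not StatsGuru’s format); pass overs as the denominator for RPO, or wickets for the per-wicket average that comes up next:

```typescript
// Hypothetical shape for per-year team totals exported from StatsGuru.
interface YearTotals {
    year: number;
    runs: number;
    overs: number;
    wickets: number;
}

// Rolling rate over the previous `window` years: sum(runs) / sum(denominator).
// Summing before dividing weights each year by how much cricket was played in it.
function rollingRate(
    years: YearTotals[],
    window: number,
    denominator: (y: YearTotals) => number
): { year: number; rate: number }[] {
    const sorted = [...years].sort((a, b) => a.year - b.year);
    return sorted.map((y, i) => {
        const slice = sorted.slice(Math.max(0, i - window + 1), i + 1);
        const runs = slice.reduce((sum, t) => sum + t.runs, 0);
        const denom = slice.reduce((sum, t) => sum + denominator(t), 0);
        // A zero denominator (e.g. a hiatus) shows up as a gap, not a spike.
        return { year: y.year, rate: denom === 0 ? NaN : runs / denom };
    });
}

// const rpo5 = rollingRate(englandYears, 5, y => y.overs);    // runs per over
// const avg5 = rollingRate(englandYears, 5, y => y.wickets);  // runs per wicket
```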

But it made me wonder: RPO shows how fast the team bats, but what would the batting average (per wicket) look like?

In the 10 years up to 2009, an Australian wicket scored an average of roughly 10 runs more than a wicket from any of the other teams! Nice to see the South Africans getting to the top, although as the 5-year average (below) shows, it looks like we’re on a bit of a decline.

Good to see how Bangladesh is now competing with the older test-playing nations. Are the Aussies really still top over the last 5 years? Maybe their bowling lets them down, as a team. I suppose the same exercise needs to be done for the bowling team… not tonight, though.


Lately I’ve been building websites with AngularJS, TypeScript, WebApi and Entity Framework 6 (and Bootstrap, obvs!). A lot of the work is repetitive grunt-work, generating the model, then the DTO, then the WebApi controller, etc. For a little while now I’ve been using my own “code generator” tool to scaffold out these “templates”, which saves me a lot of work. Recently I open-sourced the project on GitHub, and now I’ve got a chance to blog about it.

I’ve also set up a hosted version, which you can play with now, to see how it works. Simply head over to http://codegenerator.sitedemo.co.za/ and log in with the following credentials:

  • Username: demo@capesean.co.za
  • Password: L3tM3!n

Here follows an explanation of how to use it, and what it does.

Projects

When you first log in, you will see a list of projects. Click the Demo Project for now. You will see a page like the following:

You can see there are 4 entities: Product, Customer, Order and Line Item (we all know where this is going, right?).

You’ll see 13 columns in the table: Model, Enums, DTO, SettingsDTO, etc. These are the outputs that will be generated by the tool. More on these later.

Entities

Clicking on the Customer entity link takes us to the following webpage:

Some entity-level fields are hidden under the More > button, but we’ll ignore those for now. More importantly, you see the 3 fields on the entity: CustomerId, Name, and Telephone. You can add more fields, rearrange their order, or click on a field row to edit that field.

Beneath the fields are the relationships with other entities: those relationships where the Customer entity is the parent, and then those where it is the child. Customer is a parent of Orders, so you can see the relationship to the Orders entity listed.

Beneath that you’ll find Code Replacements. Code Replacements allow you to customize the generated outputs. More on that in another post, though.

Fields

For now, let’s click the Name field. This brings up the following page:


Here you can see the standard things you’d need for a field: a field name, a label, the data type (e.g. nvarchar), the length, whether it’s a key field, whether it’s unique, whether it’s nullable, whether it’s a search field, whether it should be shown on the search results page, a sort order, etc. So enough to get you started.
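Purely as an illustration of the kind of metadata being captured here (this isn’t the tool’s actual schema), a field definition amounts to something like:

```typescript
// Hypothetical sketch of the metadata captured per field – not the tool's actual schema.
interface FieldDefinition {
    name: string;                  // e.g. "Name"
    label: string;                 // what the generated pages display
    dataType: string;              // e.g. "nvarchar"
    length?: number;               // for string types, e.g. 250
    isKey: boolean;                // part of the primary key?
    isUnique: boolean;             // generate a unique index?
    isNullable: boolean;
    isSearchField: boolean;        // include in the text search?
    showInSearchResults: boolean;  // display as a column on the list page?
    sortOrder?: number;            // default sort priority for results
}
```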

The Code!

Back on the entity page, if you click the blue Code </> button, you’ll get to this page:

Here you’ll find 13 checkboxes and 13 tabs: one for each of the output files that the Code Generator tool produces. (The checkboxes are for deploying the outputs directly to your local folder, if/when the tool is installed on your development machine – again, more on that in a later post).

The Model Code

For now, you’ll see that the tool is generating a Model file in the screenshot above. It outputs the key field, CustomerId, which is a Guid. The Name field is a string with a maximum length of 250 and a unique index. There’s also a navigation property to the Orders collection, which is produced because of the relationship defined from Customer to Orders.

The WebApi Controller Code

Let’s look at the WebApi controller code next:

So the API is protected with Authorize(Roles = "Administrator"), and the route prefix is api/customers.

There is a Search endpoint which takes an optional paging object, for paging through results, and a string search parameter q. If q is supplied, the controller searches the Customer.Name field for matches, because Name was defined as a text-search field. Name was also designated as a sort field, so the results are ordered by it. The controller then gets a paginated response object and converts the models to DTOs using the ModelFactory.Create method.

Further down you’ll see a Get method for returning a single item, then Insert and Update methods (both of which use a private Save method), and lastly a Delete method.
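As an aside, here’s a hypothetical sketch of how a TypeScript client might call that Search endpoint; the paging parameter names (pageIndex, pageSize) are my assumption, not necessarily what the generator emits:

```typescript
// Hypothetical client-side call to the generated Search endpoint.
// The paging parameter names are assumptions, not the generator's exact contract.
interface CustomerSearchOptions {
    q?: string;          // text search against Customer.Name
    pageIndex?: number;
    pageSize?: number;
}

class CustomerSearchService {
    static $inject = ["$http"];
    constructor(private $http: ng.IHttpService) { }

    search(options: CustomerSearchOptions) {
        // GET /api/customers?q=...&pageIndex=...&pageSize=...
        return this.$http.get("/api/customers", { params: options });
    }
}
```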

The AngularJS (TypeScript) Code

Ok, let’s look at what it does from the AngularJS / TypeScript side. Here is the output for the AngularJS controller, for the Edit page:

This is a standard AngularJS controller, with several dependencies injected. Let’s look at what it does.

There’s an initPage function which runs when the controller loads. It determines via $stateParams whether the entity is being added (new) or loaded (existing). If it’s being loaded, it uses the customerResource (ngResource) to .get the appropriate record. That’s really it, in a nutshell.

Then there’s a save function, which saves changes up to the API.

Then there’s a delete function, which will delete the entity.

And lastly there’s a loadOrders function, which loads the customer’s orders using the pagination parameters mentioned briefly in the Controller section above, displaying 10 orders at a time with a pager to move through them.
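Pulling those pieces together, here’s a rough sketch of the controller’s shape – identifiers are illustrative, not the generator’s exact output (loadOrders omitted for brevity):

```typescript
// A rough sketch of the generated edit controller; names are illustrative.
class CustomerEditController {
    customer: any;
    isNew: boolean;

    static $inject = ["$stateParams", "customerResource"];
    constructor(
        private $stateParams: ng.ui.IStateParamsService,
        private customerResource: ng.resource.IResourceClass<ng.resource.IResource<any>>) {
        this.initPage();
    }

    private initPage(): void {
        // No id in the route = adding a new record; otherwise load the existing one.
        const id = this.$stateParams["customerId"];
        this.isNew = !id;
        this.customer = this.isNew
            ? new this.customerResource()
            : this.customerResource.get({ customerId: id });
    }

    save(): void {
        // ngResource routes this to the API's Insert/Update methods.
        this.customer.$save();
    }

    delete(): void {
        this.customer.$delete();
    }
}
```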

The Html Code

Lastly, let’s look at an Html page that gets output.

Here’s the Html that works with the AngularJS controller. I’m not going to go into detail, but you’ll see it uses Bootstrap 4 (although there’s a setting for Bootstrap 3 on the Project), does a bit of validation using ng-messages, has the Save and Delete buttons, and displays the customer’s orders in a list at the bottom (if the customer record is not new).

Hopefully that explains it enough to show you what it can do. Obviously you’ll need a project with the appropriate supporting files (e.g. the BaseApiController.cs, the WebApiConfig.cs, the ApplicationDBContext.cs, etc). However, the CodeGenerator project on GitHub has all these files in it! So you can simply strip out the files related to the CodeGenerator, paste in the files from your project, and you should be good to go. (If you’re struggling to get that set up, I can/will provide an 'empty' project that you can start with, if it would help.)

Let me know if you find it useful, and if you hit any issues. Hopefully this will help someone’s productivity as much as it’s helped mine!

Recently I’ve started using Azure for my applications and their databases. One of the first things I encountered was how slow Azure SQL DBs seemed to be, compared with performance on my local machine and on other hosting services I’ve used. A query I was running on a dev laptop, which isn’t a beast of a machine, would regularly take under a second to complete. The primary table it was querying has about 200,000 records, and the query had about 5 or 6 joins.

When the same database was up on Azure, my website kept timing out when it hit that query. I ran the query through Management Studio and reliably got 42-second response times. This was on an Azure S0 database instance.

Searching the web for "slow azure db", I came across this result:
https://feedback.azure.com/forums/217321-sql-database/suggestions/6848339-please-reconsider-the-new-db-pricing-tiers

Seems like I’m not the only one, and that with the change in pricing last year (which predated my Azure experience), the performance of relatively small databases has declined significantly. Most people who posted on the above page said they had to upgrade their instances to get only marginally better performance. That obviously comes at a cost, but I thought I would try it out.

I upgraded to an S1 instance: the query time was reliably 20 seconds.

I upgraded again to an S2 instance: the query time was reliably 12 seconds.

Clearly the performance was improving, but so was the cost, and I couldn’t justify the cost of an S2 database. It’s not exorbitant, but charging my customers that much would have been unpalatable when other hosting options are so much cheaper. And 12 seconds was still nowhere near good enough – it needed to be under a second, otherwise my website just wouldn’t be fast enough.

As a last resort I tried adding indices for the query. I hadn’t thought of doing this because the query ran so fast on my local machine, and because all the joins were on the foreign tables’ primary keys, I assumed the query was "good enough" – surely the difference between 0 seconds and 42 seconds couldn’t be an index issue, and surely Azure should give comparable performance to a mid-range laptop or another hosting option?
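For example (table and column names here are purely illustrative, not my actual schema), the fix amounts to indexing the foreign key columns that the joins run on:

```sql
-- Illustrative only: index the foreign key columns the query joins on,
-- so the engine doesn't have to scan the large table for each join.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
CREATE NONCLUSTERED INDEX IX_Orders_ProductId ON dbo.Orders (ProductId);
```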

I was still on the S2 instance when I tested it with the indices in place. Query time was 0 seconds!

I decided to downgrade back to the S0 instance. Query time remained 0 seconds!

Fantastic! If Azure SQL needs the appropriate indices to perform well on the lower tiers, I can handle that.

So here’s my thinking on why it works like that, although I’m no expert: on my laptop, and on the other hosting provider, if a "big" query comes along, all the computer’s/server’s resources (RAM/CPU) are used by that process and the query is resolved quickly. That’s fantastic, unless your site is sitting on a shared server where other websites are generating heavy workloads, and your simple queries get queued until the resources are available again. I read that the changes to Azure’s pricing model were partly precipitated by complaints about the unpredictability of Azure DB performance.

My guess is that this is what was happening on Azure previously. The change means that (with the DTU pricing model) you are now practically guaranteed a level of performance. That’s both an upside and a downside: your database is not going to be able to consume huge amounts of resources to process (relatively) expensive queries. The solution is to make sure your queries are tuned as much as possible; if you have the appropriate indices in place (whose absence you might not notice during development), you should still be able to get decent performance from your Azure SQL database, at an acceptable price point too.