Yeah I know, another AZ-204 Exam blog. My apologies. Some people say I apologise too much, so apologies for that also.
I think I mainly started this post for my own usage, but nevertheless, I hope these notes can also help other people.
App service plans
Plan | Tier | Notes |
---|---|---|
F1/D1 | Free | Free for dev purposes, can't scale |
B1 | Basic | Cheap, custom domain SSL, manual scaling |
S1 | Standard | Production worthy, autoscale, staging slots, traffic management |
P1V2 / P1V3 | Premium | Same as S1 but more traffic, more staging slots, more space |
I1V2 | Isolated | Dedicated hardware, isolated network, private app access |
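For reference, a plan in one of these tiers is created with `az appservice plan create` (a sketch with placeholder names, assuming the resource group already exists):

```shell
# Create a Standard (S1) App Service plan
az appservice plan create \
    --name myPlan \
    --resource-group myResourceGroup \
    --sku S1
```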
Basic Azure CLI commands
Create a resource group
```shell
az group create --location <location> \
                --name <resource-group> \
                [--managed-by] \
                [--tags]
```
Create a webapp
```shell
az webapp create --name <app-name> \
                 --plan <plan-name> \
                 --resource-group <resource-group>
```
Configure continuous deployment for a container webapp

```shell
az webapp deployment container config --enable-cd {false, true} \
                                      [--ids] \
                                      [--name] \
                                      [--resource-group] \
                                      [--slot]
```
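The command above only toggles continuous deployment. Actually creating a webapp from a container image is done with `az webapp create` plus a container image flag (a sketch with placeholder values; newer CLI versions rename the flag to `--container-image-name`):

```shell
az webapp create \
    --name <app-name> \
    --plan <plan-name> \
    --resource-group <resource-group> \
    --deployment-container-image-name <registry>/<image>:<tag>
```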
App service general settings

Azure Functions plans


Azure Storage options

```shell
az storage account create \
    --name <storage-account> \
    --resource-group <resource-group> \
    --location <location> \
    --sku Standard_ZRS \
    --encryption-services blob
```

```shell
# Approach 1: Create a container
az storage container create \
    --name $containerName \
    --account-name $storageAccount \
    --auth-mode login
```
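Once the container exists, a single file can be uploaded with `az storage blob upload` (a sketch reusing the variables above):

```shell
az storage blob upload \
    --account-name $storageAccount \
    --container-name $containerName \
    --name myTextFile.txt \
    --file ./myTextFile.txt \
    --auth-mode login
```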




```shell
azcopy login
azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'
```
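To copy a whole directory rather than a single file, azcopy takes a `--recursive` flag (same placeholder storage account as above):

```shell
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive
```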

Cosmos DB

Choosing a partition key
- Pick a value that does not change over time
- It must be as unique as possible
- Use random suffixes to spread writes evenly across partitions

```shell
az cosmosdb create \
    --resource-group $resourceGroupName \
    --name $accountName \
    --locations regionName=$location
```
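The random-suffix tip above can be sketched in Python (a hypothetical helper of my own, not an SDK function): appending a small random suffix to an otherwise hot key value fans writes out over several logical partitions.

```python
import random

def partition_key(base_value: str, suffix_count: int = 10) -> str:
    """Spread writes for one base value across `suffix_count` logical
    partitions by appending a random numeric suffix."""
    return f"{base_value}-{random.randint(0, suffix_count - 1)}"
```

On read, you fan the query out over all possible suffixes, so keep `suffix_count` small.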
Strong consistency
Strong consistency offers a linearizability guarantee. Linearizability refers to serving requests concurrently. The reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial write. Users are always guaranteed to read the latest committed write.
Bounded staleness consistency
In bounded staleness consistency, the reads are guaranteed to honor the consistent-prefix guarantee. The reads might lag behind writes by at most “K” versions (that is, “updates”) of an item or by “T” time interval, whichever is reached first. In other words, when you choose bounded staleness, the “staleness” can be configured in two ways:
- The number of versions (K) of the item
- The time interval (T) reads might lag behind the writes
Session consistency
In session consistency, within a single client session reads are guaranteed to honour the consistent-prefix, monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. This assumes a single “writer” session or sharing the session token for multiple writers.
Consistent prefix consistency
In the consistent prefix option, updates that are returned contain some prefix of all the updates, with no gaps. The consistent prefix consistency level guarantees that reads never see out-of-order writes.
If writes were performed in the order A, B, C, then a client sees either A, or A,B, or A,B,C, but never out-of-order permutations like A,C or B,A,C. Consistent prefix provides write latencies, availability, and read throughput comparable to that of eventual consistency, while also providing the ordering guarantees that suit scenarios where order is important.
Eventual consistency
In eventual consistency, there’s no ordering guarantee for reads. In the absence of any further writes, the replicas eventually converge.
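Tying this back to the CLI: the account's default consistency level, and for bounded staleness the K and T bounds, can be set at creation time. A sketch with placeholder values, using the `az cosmosdb create` flags for these settings:

```shell
az cosmosdb create \
    --resource-group $resourceGroupName \
    --name $accountName \
    --locations regionName=$location \
    --default-consistency-level BoundedStaleness \
    --max-staleness-prefix 100 \
    --max-interval 300
```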
Event Grid
Event Grid is a highly scalable, serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to subscriber destinations such as applications, Azure services, or any endpoint to which Event Grid has network access. The source of those events can be other applications, SaaS services and Azure services.
The maximum allowed size for an event is 1 MB. Events over 64 KB are charged in 64-KB increments. For the properties that are sent in an event, see CloudEvents schema.
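As a back-of-the-envelope check on the billing rule above (my own sketch, not an official pricing formula):

```python
import math

def billed_units(event_size_bytes: int) -> int:
    """Number of 64-KB billing increments for a single Event Grid event."""
    KB64 = 64 * 1024
    if event_size_bytes > 1024 * 1024:
        raise ValueError("Event exceeds the 1 MB maximum size")
    # Events up to 64 KB count as one unit; larger events are
    # charged in 64-KB increments.
    return max(1, math.ceil(event_size_bytes / KB64))
```

So a 65-KB event is billed as two units, and a maximum-size 1 MB event as sixteen.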
```shell
az eventgrid event-subscription create \
    --source-resource-id $topicid \
    --name <event_subscription_name> \
    --endpoint <endpoint_URL> \
    --deadletter-endpoint $storageid/blobServices/default/containers/$containername
```
When creating an Event Grid subscription, you can set values for how long Event Grid should try to deliver the event. By default, Event Grid tries for 24 hours (1440 minutes), or 30 times. You can set either of these values for your event grid subscription. The value for event time-to-live must be an integer from 1 to 1440. The value for max retries must be an integer from 1 to 30.
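Both retry settings map to flags on `az eventgrid event-subscription create` (a sketch with placeholder names and example values):

```shell
az eventgrid event-subscription create \
    --source-resource-id $topicid \
    --name <event_subscription_name> \
    --endpoint <endpoint_URL> \
    --event-ttl 120 \
    --max-delivery-attempts 10
```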
Comparison of services
Service | Purpose | Type | When to use |
---|---|---|---|
Event Grid | Reactive programming | Event distribution (discrete) | React to status changes |
Event Hubs | Big data pipeline | Event streaming (series) | Telemetry and distributed data streaming |
Service Bus | High-value enterprise messaging | Message | Order processing and financial transactions |
When creating an event subscription, you have three options for filtering:
- Event types
- Subject begins with or ends with
- Advanced fields and operators
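These three filter types correspond to flags on `az eventgrid event-subscription create` (a sketch with placeholder values, here filtering Blob storage events):

```shell
az eventgrid event-subscription create \
    --source-resource-id $topicid \
    --name <event_subscription_name> \
    --endpoint <endpoint_URL> \
    --included-event-types Microsoft.Storage.BlobCreated \
    --subject-begins-with /blobServices/default/containers/mycontainer \
    --advanced-filter data.api StringIn CopyBlob PutBlob
```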
Azure Event Hub

Azure Storage Capture
Azure Event Hubs enables you to automatically capture the streaming data in Event Hubs in an Azure Blob storage or Azure Data Lake Storage account of your choice, with the added flexibility of specifying a time or size interval. Setting up Capture is fast, there are no administrative costs to run it, and it scales automatically with Event Hubs throughput units in the standard tier or processing units in the premium tier.
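Capture can be enabled at event hub creation time; a sketch with placeholder names, using the capture flags of `az eventhubs eventhub create` (interval in seconds, size limit in bytes):

```shell
az eventhubs eventhub create \
    --resource-group $resourceGroupName \
    --namespace-name $namespaceName \
    --name $eventHubName \
    --enable-capture true \
    --capture-interval 300 \
    --capture-size-limit 314572800 \
    --destination-name EventHubArchive.AzureBlockBlob \
    --storage-account $storageAccountId \
    --blob-container $containerName
```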
Azure Service Bus

Some common messaging scenarios are:
- Messaging. Transfer business data, such as sales or purchase orders, journals, or inventory movements.
- Decouple applications. Improve reliability and scalability of applications and services. Client and service don't have to be online at the same time.
- Topics and subscriptions. Enable 1:n relationships between publishers and subscribers.
- Message sessions. Implement workflows that require message ordering or message deferral.
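A minimal provisioning sketch for the queue-based scenarios above (placeholder names; topics work the same way via `az servicebus topic create`):

```shell
az servicebus namespace create \
    --resource-group <resource-group> \
    --name <namespace-name> \
    --sku Standard

az servicebus queue create \
    --resource-group <resource-group> \
    --namespace-name <namespace-name> \
    --name orders
```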