  • 6/14/2025
Welcome to the SkillTech DP-900 Training Series!
In this lesson, we’ll explore how to manage non-relational data stores using Azure’s powerful tools and services.

From scaling Azure Cosmos DB to configuring consistency levels and monitoring performance, this session gives you practical knowledge aligned with the Microsoft DP-900 exam objectives.

Topics Covered:

Azure Cosmos DB Management Basics
Performance tuning and autoscale options
Consistency levels: Strong, Session, Eventual, and more
Backup, restore, and global distribution
Real-world tips to manage NoSQL data effectively

Whether you're preparing for the DP-900: Azure Data Fundamentals exam or just getting started with cloud databases, this video will strengthen your understanding of NoSQL management in Azure.

Explore Our Other Courses and Additional Resources on: https://skilltech.club/

Category: Tech
Transcript
00:00In this lecture we are going to understand how we can manage non-relational data stores in Azure. As we know, we are going to focus on the two data stores we have already provisioned: the Azure Storage Account and the Azure Cosmos DB account.
00:23So I am inside my storage account, the same storage account we created in the previous lesson. In the left-side section I have my data storage options: containers, file shares, queues, and tables.
00:40We will talk about containers, file shares, and tables in this session. First, if I go into Tables, it shows that this option allows me to store data in a tabular format, but it is not like an RDBMS table structure, because we do not have any foreign keys or relationships here.
00:59If I create a new table, let's say I give it the name employee, and click OK, it allows me to create the new table, and you can see there is a URL property associated with it, which maps automatically to the API connected with this Azure Storage table.
01:19I could use this API directly to store data in this particular table, but in my case, to work with this table, I am going to click on an option here called Storage Browser.
01:32This Storage Browser is an internal utility. I can click on the Tables section, see that my employee table is created there, and inside this employee table I can add a new entity, edit columns, or add customized data
01:49using properties like the partition key, row key, and timestamp. I am going to click on Add Entity now, and this takes me to a page where I can define some properties and values; it is all about key-value pairs. Partition key and row key are the default columns for this table. Obviously, the partition key is going
02:10to be useful for grouping related entities, and the row key helps define each row with a unique ID. One partition key can have a bunch of multiple row keys inside it. For example, in the partition key I will give a date, say 01/01, meaning 1st of January 2022,
02:38and then in the row key I will use a key like 101. I also want to add some more properties, so let's say I want to store the name of my employee, which will be a string with a value like maruti. I want to add one more property, age, and change the data type
03:08to Int32 and give an age there. Likewise, I can add some more properties and click on Insert. The moment I do, this entity shows me an error: "failed to add entity", because the partition key cannot contain characters like the slash; it should be in a proper format.
03:38I need to make sure it is in the proper format, so now I am just typing 01012022.
03:44Later on, while reading this data, I can treat this value as a date. And yes, it is stored inside this one partition key and the record is added. The same way I can add more entities, which are nothing but more records inside this table.
03:57In the same way I can add multiple tables. If I go to the Tables section, I can click on Add Table, and alongside employee I can add one more table called department and click OK. I can add data to it in the same way, but remember, you cannot have a relationship between this department table and the employee table.
04:19These tables just allow you to store data in a tabular, row-and-column format.
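The entity model described above can be sketched in a few lines of plain Python. This is not the Azure SDK; the class and helper names are made up for illustration. It shows how every entity is addressed by a PartitionKey/RowKey pair, why the portal rejected the 01/01/2022 key, and how one partition can hold many rows:

```python
# A minimal local sketch of how Azure Table Storage addresses entities.
# NOT the Azure SDK -- just a model of the PartitionKey/RowKey idea
# from the lecture (the table name `employee` mirrors the example).

class TableSketch:
    def __init__(self, name):
        self.name = name
        self.entities = {}          # keyed by (PartitionKey, RowKey)

    def insert(self, entity):
        # Every entity carries PartitionKey and RowKey; neither may
        # contain characters such as '/' (the portal error above).
        pk, rk = entity["PartitionKey"], entity["RowKey"]
        for ch in "/\\#?":
            if ch in pk or ch in rk:
                raise ValueError(f"key contains forbidden character {ch!r}")
        if (pk, rk) in self.entities:
            raise ValueError("entity already exists")
        self.entities[(pk, rk)] = entity

    def partition(self, pk):
        # One partition key can hold many row keys.
        return [e for (p, _), e in self.entities.items() if p == pk]

employee = TableSketch("employee")
employee.insert({"PartitionKey": "01012022", "RowKey": "101",
                 "Name": "maruti", "Age": 30})
employee.insert({"PartitionKey": "01012022", "RowKey": "102",
                 "Name": "other", "Age": 25})
print(len(employee.partition("01012022")))  # 2 -- two rows, one partition
```

Inserting with the slash-formatted key 01/01/2022 would raise the same kind of error the portal showed.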
04:26Moving forward to file shares. A file share is a kind of storage where I can upload my files and then share them with other computers, which may be located outside Azure.
04:37I can share it with applications, maybe cloud-hosted or online applications, that want to get this data.
04:44I can click on the file share and give it a name. Let's say this is seminarpics, for pictures I have taken at my previous seminar.
04:58What kind of tier I choose will define the performance and cost associated with this file share. If I have frequent usage and keep the files in the hot tier, that gives me good performance.
05:12If I use the files less frequently, I can go with cool, and if I am not sure, I can go with transaction optimized.
05:21The maximum size of one file share is going to be 5 terabytes, and they give me a transfer rate of around 60 megabytes per second, which is more than enough.
05:32I am going to click on Create, and the moment I do, this file share behaves like a folder inside which I can create more directories, more folders.
05:43For my seminar pics I am adding a directory day1, then one more directory day2, and this kind of day-wise segregation I can do with folders.
05:57Inside each folder I can upload multiple files of any format. So I can go to my Pictures, select some pictures, and click Open. The moment I upload them, these images are uploaded into this particular file share, and you can see they are treated like normal files; the file extension, jpg, is shown here.
06:25If I want to share these images with some computers or applications, at the top I have an option called Connect.
06:35I can click on Connect, and on the right side they show that this is a secure transfer using the SMB 3.0 protocol, and below that there are ways to connect to this share.
06:48If I have a Windows-, Linux-, or Mac-based machine, I can use it to connect to this file share and access the same files from there.
07:01Let's say I am choosing Windows. All I need to do is go to the machine where I want to access this file share and execute the PowerShell script they provide here.
07:13Technically, the script just checks that port 445 is reachable and then mounts this particular share as a network drive on that computer, and that's what they are showing me.
07:27You can try this thing if you have your personal computer but I do not recommend doing this thing in your office laptops.
07:33So if you have a personal computer with Windows, Linux and Mac you can try this.
07:38Otherwise you can create a virtual machine on Azure and you can try this thing with that also.
07:42This file share just allows me to share these files. So my tables hold data in a semi-structured format, while my file share holds unstructured data,
07:55because whatever folder structure I create is the only way to organize it;
07:59there is no schema I need to follow here.
08:04And then the next important thing is containers.
08:07As we know containers are allowing me to store data in the form of blob.
08:12This is a storage account container.
08:14Don't get confused by the name: this is the storage account's own kind of container.
08:16These containers are something like a folder structure, with either a flat namespace or a hierarchical namespace depending on what type of storage account you created.
08:27And then when you create a container you can see that I have some default security associated with that.
08:33Like I can say my container name is media files or maybe media and then what kind of access level I want.
08:43That is something which I can choose.
08:45I can keep it anonymous; the "Container" access level means anonymous read access for the full container.
08:52After this, anyone can access this container and the files inside it if they know the exact URL.
08:59If I go with "Blob", then only file-level access is given, not container-level access.
09:06So a person can access only a particular file whose URL they know.
09:08And "Private" means no anonymous access: nobody can access the data inside directly.
09:15They need a valid token, or valid authentication, in order to access the data.
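The three access levels just described can be summarized as a small decision function. This is a conceptual sketch, not the Azure SDK; the function name and request labels are made up for illustration:

```python
# Conceptual model of the three anonymous access levels described above:
# "container", "blob", and "private". Not the Azure SDK.

def anonymous_read_allowed(access_level: str, request: str) -> bool:
    """request is 'list_container' (enumerate blobs) or 'read_blob'
    (fetch one blob by its exact URL)."""
    if access_level == "container":
        return True                       # anonymous read of the whole container
    if access_level == "blob":
        return request == "read_blob"     # file-level access only
    return False                          # private: a valid token/auth is required

print(anonymous_read_allowed("blob", "read_blob"))        # True
print(anonymous_read_allowed("blob", "list_container"))   # False
print(anonymous_read_allowed("private", "read_blob"))     # False
```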
09:21You can learn more about this by watching my other videos on Azure fundamentals, AZ-900, or AZ-204, which is for Azure development.
09:33I cover this in depth in another video course, DP-203, which is a more advanced course for data engineers.
09:40So you can go through those videos to get more details about storage accounts.
09:44I am creating this new container with the access level Private, and then I am going to click on Create.
09:50Once this container media is created I can upload some more files into this.
09:56But this time, whatever files I upload inside this container, even though they are JPG image files,
10:03will be treated as blobs.
10:05You can see this is a blob file.
10:07We are uploading blobs.
10:08We are able to see the blobs.
10:09And if I check the type of this blob, it shows me that this is a block blob.
10:14In the previous lesson we understood this that we have block blob.
10:18We have append blob.
10:20We have page blob kind of types here.
10:22While uploading any image or file you can choose what type you want for your blob.
10:29And ultimately it stores the data as a binary large object, so that retrieval of this data is much faster.
10:36So this is what we understand about this particular storage account: it allows me to store data in the form of blobs, files, and tables.
10:50Now once you are clear with this let's have a look at our Cosmos DB account.
10:55So now I am inside my Cosmos DB account and you can see that this is my account.
11:02As of now we have created this account with the SQL API, and in the left-side column the first option I want you to focus on is Data Explorer.
11:12Same like here in the storage account you have something which is your storage browser.
11:17Here we have data explorer which allows me to explore the data which is there inside this Cosmos DB.
11:23I can see that this is a Cosmos DB account which is following SQL API and now following the same API model I can create more data into this.
11:34On the top I have two options.
11:36I have an option to create a new database and then inside one database I can have multiple containers.
11:43Don't get confused here either: we have the word "container" here, while in the storage account we also had the word "container".
11:50That container and this container are totally different; the only common thing is the name.
11:59The storage account container allows me to store blob data, while this container inside Cosmos DB actually holds the JSON documents created in this database.
12:16Let's create our first new database in this.
12:19Let's say the name of my database is companyDB.
12:24This is maybe the database of some company.
12:27I am just giving the name companyDB, and they ask me what kind of throughput I want to provision for this.
12:34I can choose an auto scale or manual scale as per my choice.
12:38As I already discussed earlier, the throughput starts at 400 request units per second.
12:45That means roughly 400 requests can be processed per second, or it can go up to even millions of request units.
12:52I can put 1 million, 2 million, or even 10 million request units into that.
12:58Obviously it's going to increase the cost but the good thing is this kind of performance, this kind of throughput is actually possible in Cosmos DB.
13:06Compared to SQL Server, this is going to be much faster.
13:11I am keeping the default, which is 400 request units per second, and that can cost me around 24 US dollars per month.
13:18I am okay with that for this one database.
13:24And remember, throughput is a property associated with the database.
13:28You can have another database with a different throughput.
13:32You can have any number of databases inside one Cosmos DB account.
13:32I am going to click on ok and my new database will be listed here under the data section.
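As a rough sanity check of the pricing quoted above, here is the arithmetic. The per-RU rate is derived from the lecture's ~$24/month figure, not from current Azure pricing, which varies by region:

```python
# Back-of-the-envelope check of the figure quoted above: 400 RU/s for
# roughly 24 US dollars per month. The derived rate is illustrative,
# not official Azure pricing.

monthly_cost_usd = 24.0
provisioned_ru = 400
hours_per_month = 730               # common billing approximation

cost_per_100ru_per_hour = monthly_cost_usd / hours_per_month / (provisioned_ru / 100)
print(round(cost_per_100ru_per_hour, 4))   # ~0.0082 USD per 100 RU/s per hour

# Scaling linearly, 1 million RU/s at the same rate:
print(round(monthly_cost_usd * 1_000_000 / provisioned_ru))  # 60000 USD/month
```

This linear scaling is why the lecture notes that million-RU throughput is possible but "obviously going to increase the cost".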
13:49Once you have one database, it is going to have multiple containers, and each container is going to have multiple records in the form of JSON documents.
13:57It is the same as in SQL Server, where you have a hierarchy: one SQL Server can have multiple databases, and one database can have multiple tables.
13:57Here one Cosmos DB account can have multiple databases.
14:02One database if you expand, it is going to have multiple containers inside that.
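The hierarchy described above, with the SQL Server analogy, can be sketched as nested structures. The names companyDB and customers come from the lecture; the layout is illustrative, not an SDK object model:

```python
# Illustrative model of the Cosmos DB hierarchy described above:
# account -> databases -> containers -> JSON items.
# (Plain dicts, not the azure-cosmos SDK.)

cosmos_account = {
    "companyDB": {                      # one database (throughput set here)
        "throughput_ru_per_sec": 400,
        "containers": {
            "customers": [],            # each container holds JSON items
        },
    },
}

# SQL Server : database : table  <=>  Cosmos account : database : container
db = cosmos_account["companyDB"]
db["containers"]["customers"].append({"id": "1", "cust_id": 101})
print(list(db["containers"].keys()))    # ['customers']
```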
14:07We do not have any container in this database right now.
14:09So I am going to click on these three dots and I am going to say I want to create my first container.
14:14Let's say inside the company DB, the first container is going to store the details of customers of this company.
14:22So I am just giving a name of the container, customers and then what kind of a partition key you want.
14:29Now, this partition key is different from the table partition key we saw earlier.
14:32Because this container holds customers, let's say I want the customer ID to be the partition key, which is going to be a unique property for each document.
14:43I can add a unique key also.
14:56I can also go to the Advanced section, where I can opt for partition keys larger than 100 bytes if I really have millions of records inside.
14:56And additionally, if I am going to focus on data analytics, you can see that we have some Azure Synapse link integration also in a Cosmos DB.
15:05We will discuss this when we look at Azure Synapse Analytics in the next module, but this kind of option can also be enabled here.
15:14I am not enabling the analytical store right now.
15:17I am going to click on OK and this is going to create my first container inside my company DB.
15:24Let's see if it is created.
15:28Yes, it is created now and I can see that I have a new container.
15:34Now I know I am calling it container because technically the term is container.
15:39Now, as you can see, my container, which is named customers, has sections like Items, Settings, and Stored Procedures.
15:48And even I can create user defined functions and triggers which can be associated with this particular container.
15:54I am going to click on items first and then you can see at the right side I do not have any items so it is not showing me anything.
16:01But I have an option on the top to create a new item.
16:04When I click on new item, this is nothing but one single record which is going to be stored inside this container as a JSON document.
16:11This can contain properly formatted JSON data, and that's why we can call this semi-structured data.
16:18I can put some ID here which has to be unique.
16:21Let's say I am putting the ID of this particular document will be some number which has to be unique.
16:28And after comma, I can put my kind of key value pairs into this which will be followed by the schema of this document.
16:37Let's say I am providing, because it is a table for customers, I am giving something like cust underscore ID.
16:45That will be the key.
16:47And I think the ID is already there which is for the uniqueness of the document.
16:52But with that I want a customer ID which has to be something like a unique number again.
16:58But this is going to be something which is in sequence.
17:01Then I am providing some more details like customer name.
17:07And then the customer name is actually going to have some value inside that.
17:13Then I am going to provide maybe let's say customer address.
17:23So I am giving cust underscore address.
17:27And then we can provide some values into that also.
17:30And this address can be anything, street, area, pin code, whatever is inside that.
16:41Once I have this kind of key-value pair, this is going to be treated like one record, or I can say one document.
16:47And the moment I click on Save, just observe this: it processes the document and saves it as one single record.
17:56You can see it's my Cosmos DB and that's why this is my unique ID which is associated here as a partition key also.
18:04And then my other details like customer ID name and address which I have stored here.
18:09Along with those key-value pairs, Cosmos DB has already generated some other key-value pairs.
18:13These properties, like _rid, _self, and _etag, are used by Cosmos DB internally.
18:19Because technically this is not a relational database; it's a non-relational data store.
18:23But still when I want to relate this customer data with some other containers which I am going to create,
18:29then logically I can do that thing and I can also write queries and stored procedures on top of that.
18:35This is the magic of Cosmos DB, and that's why Microsoft calls this cloud database a multi-model database
18:42with queries, stored procedures, and integrations inside it.
18:46This is one of the advantage of Cosmos DB.
18:48In the same way I can add more records, and every record will be treated as a separate JSON document with a unique partition key and ID.
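The finished item can be pictured as a JSON document like the one below. The cust_* fields mirror the lecture; the underscore properties stand in for the system fields Cosmos DB adds on save, and their values here, like the address details, are made up:

```python
import json

# Sketch of the JSON item built above. User fields follow the lecture
# (id, cust_id, cust_name, cust_address); the underscore properties are
# illustrative stand-ins for Cosmos DB's system fields, with fake values.

item = {
    "id": "1",                       # must be unique per logical partition
    "cust_id": 101,
    "cust_name": "maruti",
    "cust_address": {                # nested values are fine: semi-structured
        "street": "example street",
        "pin_code": "000000",
    },
    # system-managed properties (placeholder values):
    "_rid": "placeholder",
    "_etag": "\"placeholder\"",
    "_ts": 0,
}

print(json.loads(json.dumps(item))["cust_name"])  # round-trips as JSON
```

Note that every record is free to add or omit keys; no fixed schema is enforced across documents.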
18:58Once you understand the data store and structure of this Cosmos DB,
19:01the next thing which you need to understand is, you need to associate this one with the proper API and code.
19:08Like you can see, I have a Quick Start section where I can create a new container called items,
19:14and they will let me download a ready-made .NET Framework-based sample application
19:19which has code for operations like insert, update, delete, and select on the items container that gets created here.
19:30This kind of quick start samples you can find in different programming languages here.
19:34Now whichever programming language you are comfortable, you can choose that and then you can download that sample to understand it properly.
19:41Last but not least, inside each Cosmos DB account you have a consistency configuration.
19:49Like you can see, I have a section here called default consistency.
19:53Now, obviously, as Cosmos DB allows you to do multi-region writes and multi-region reads,
20:00you can see they are showing that we have five different options here for consistency.
20:06I can have Strong, Bounded Staleness, Session, Consistent Prefix, and Eventual consistency, and the option I select here will determine what kind of consistency I get in my data.
20:24I have a primary location which is East US and I also have some other locations like West Central US, North Central US which are taken here as a scenario.
20:34And now what kind of latency I will have, whether it's going to be real time update or not, that is something which depends on my configuration which I do here.
20:42For example, just to understand this, if I go to eventual, the eventual consistency shows me that if you have done some write operation in East US,
20:52maybe there are chances that when you do read operations on that, it will not be happening at the same time, it will happen after some delay.
21:00And same way if you try some other operations like insert, update and delete kind of operations on that, it can maybe vary that it will come after in some delay.
21:10So this option is actually something which is least consistent option in this and it's based on the situation which is just described here,
21:19that changes won't be maybe lost but they will appear eventually.
21:24So it's like there are chances that it is not very real time, it will appear eventually and that's why this consistency is known as eventual.
21:31Suppose if I go with strong consistency, which is like real time, everything is going to happen at that same moment.
21:37And Strong is described as: all writes are only visible to clients after the changes are confirmed
21:45as written successfully to all replicas. This option still allows you to distribute your data across multiple global regions,
21:55and everything happens at the same time in all the regions. And the same way, you can go with the other options, like Bounded Staleness or the session-based option.
22:04When you go with Session, suppose you have an application that makes a number of changes: they will all be visible to that application in order.
22:16Other clients may see your old data, although any changes they do see will appear in order, as with consistent prefix.
22:24Now, this form of consistency is sometimes known as "read your own writes": whatever write operations you do, you can read them back immediately.
22:33But if other people are reading, they may see those writes only after a delay.
22:40So it's like per-session management done for each client connected to the database.
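The difference between strong and eventual consistency described above can be illustrated with a toy replicated store. This is a simplified model, not Cosmos DB itself; the region names follow the lecture's scenario:

```python
# Toy model (NOT Cosmos DB) of strong vs. eventual consistency: with
# strong consistency a write is visible only after every replica has
# applied it; with eventual consistency a read may return a stale value
# until replication catches up.

class ReplicatedStore:
    def __init__(self, regions):
        self.replicas = {r: {} for r in regions}

    def write_strong(self, key, value):
        # visible to readers only once ALL replicas confirm the change
        for r in self.replicas:
            self.replicas[r][key] = value

    def write_eventual(self, key, value, reached):
        # only some replicas have applied the change so far
        for r in reached:
            self.replicas[r][key] = value

    def read(self, region, key):
        return self.replicas[region].get(key)

store = ReplicatedStore(["East US", "West Central US", "North Central US"])
store.write_strong("k", 1)
print(store.read("North Central US", "k"))   # 1 -- all replicas agree

store.write_eventual("k", 2, reached=["East US"])
print(store.read("East US", "k"))            # 2 -- writer reads its own write
print(store.read("West Central US", "k"))    # 1 -- stale until replication
```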
22:46Which consistency option is perfect for you? Well, you need to do research on your application and the requirement of the data processing with that.
22:54And according to that only you will get to know. And it also depends on what kind of architecture you're going to follow.
22:59I strongly recommend that you read more about this topic in depth from the documents I will share with you at the end of this course, which are official Microsoft documentation.
23:12So just try to focus more on this kind of concept so that you can understand which consistency will be more important.
23:18Especially if you're focusing for certification and the exam point of view, I'm damn sure that you will have some questions based on the Cosmos DB consistency.
23:28So in this lesson we have understood how non-relational data stores are operated in the Azure portal.
23:37We have gone through our Azure storage account and Cosmos DB account which we have created and we have stored data inside that in various formats.
23:45We'll see you next time.
23:46Thanks.
