NAS vs SAN Storage Devices
They both store data for infrastructure. Back in the day there were just servers doing double duty as data storage, and then came dedicated devices built to store the data meant for those servers.
Network Attached Storage
At the end of the day, it's an uber file server. It is a remote storage volume that acts similar to a locally attached disk, except the access happens over standard file-transfer protocols such as SMB or NFS. I suppose you could say it is similar to a lettered, mapped network share on a server. There are also newer technologies such as iSCSI, which allows a storage device to be connected over plain TCP/IP. The end result is much the same, the attached volume appears as a locally attached storage device, but there are some nicer perks such as increased performance due to fewer layers of abstraction. I'll talk a bit more about iSCSI later and how I was actually able to use it along with Veeam for VM backups on our server at work. Essentially, a NAS is optimized for storing and serving files.
Storage Area Network
Although the acronym might look like a simple flip of the letters, much like the near-identical open-source and paid variants of many Linux applications, a SAN is not just the mirror-image alternative to a NAS. This time around, it's a network of boxes that store data: the user connects to the SAN as a whole rather than to each individual device, which allows for data replication and lets a cluster of storage devices persist with redundancy! You go to the SAN, not each individual device, so reliability and redundancy are great selling points. You can also map drives to the SAN, via iSCSI as well, and have that folder or drive appear as if it were a local network drive! Although the lines are increasingly blurred, the best way for me to explain the difference is that a SAN provides block storage while a NAS provides file storage. Because of the way a SAN is configured, it is also a lot easier to add and remove storage devices, and it can be done rather seamlessly because the data is distributed across devices instead of relying on a single file-server box.
A SAN appears as a local disk you can format, while a NAS appears as a file server. Basically, file storage is a convenient abstraction built on top of block storage. Block storage provides the ability to store units of data referred to as blocks, indexed by some kind of numerical address. File storage lets you work with a more logical unit called a file, which has a human-readable name, notions of ownership, access control, and so on. The file layer also knows which addresses in the underlying block store contain the file's data, so the file can be read or edited.
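The file-on-top-of-block idea can be sketched in a few lines of Python. This is a toy model under my own assumptions, not any real filesystem; every class and name here is made up for illustration:

```python
# Toy model of block storage vs. file storage (illustrative only).

BLOCK_SIZE = 4  # tiny blocks so the example is easy to follow


class BlockStore:
    """Block storage: fixed-size blocks addressed by number."""

    def __init__(self, num_blocks):
        self.blocks = [b""] * num_blocks

    def write_block(self, addr, data):
        self.blocks[addr] = data

    def read_block(self, addr):
        return self.blocks[addr]


class FileStore:
    """File storage: a name -> block-address map layered on a BlockStore."""

    def __init__(self, block_store):
        self.disk = block_store
        self.index = {}        # filename -> list of block addresses
        self.next_free = 0     # naive allocator: never reuses blocks

    def write_file(self, name, data):
        addrs = []
        for i in range(0, len(data), BLOCK_SIZE):
            self.disk.write_block(self.next_free, data[i:i + BLOCK_SIZE])
            addrs.append(self.next_free)
            self.next_free += 1
        self.index[name] = addrs   # the "knows which addresses" part

    def read_file(self, name):
        return b"".join(self.disk.read_block(a) for a in self.index[name])


fs = FileStore(BlockStore(num_blocks=16))
fs.write_file("notes.txt", b"hello block world")
print(fs.read_file("notes.txt"))  # b'hello block world'
```

The SAN hands you something like `BlockStore`; the NAS hands you something like `FileStore`, with the name-to-address bookkeeping already done for you.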
In my recent post about virtualization, I mentioned how instances can migrate in case of physical hardware failure. Surprisingly, I received a few comments and emails asking how that could work: these are logical instances, so a couple of people were confused about how a logical instance could migrate to working servers when the hardware underneath it fails. The reason I'm including this explanation here is that the answer can be attributed to a SAN! Most of the time the hypervisors themselves are connected to the SAN, so the instances are actually stored on the SAN. All of the data being used, processed, and read by each instance is pulled from the SAN onto the physical hardware running the hypervisor. Because the data is stored on the SAN but run on the hypervisor, when physical hardware fails and the hypervisor goes down, the data is still available on the SAN. The management console (if configured to do so) will automatically spin up the necessary VM instances across the physical servers that have the resources to spare, and work keeps going with little to no downtime.
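The failover logic described above can be sketched as a toy Python model. This is my own illustrative sketch, not how any real hypervisor manager is implemented; the host names and capacity numbers are invented:

```python
# Toy VM failover sketch: VM disks live on the SAN, so when a host dies
# its VMs can simply be restarted on surviving hosts with spare capacity.
# (Illustrative only; real HA managers track far more than a VM count.)

def failover(hosts, failed_host):
    """Move every VM off failed_host onto survivors with free capacity."""
    orphans = hosts.pop(failed_host)["vms"]   # VM data is on the SAN, not the host
    for vm in orphans:
        for host in hosts.values():
            if len(host["vms"]) < host["capacity"]:
                host["vms"].append(vm)        # restart from the SAN-backed disk
                break
        else:
            raise RuntimeError(f"no capacity left for {vm}")
    return hosts


hosts = {
    "esx1": {"capacity": 3, "vms": ["web1", "db1"]},
    "esx2": {"capacity": 3, "vms": ["web2"]},
    "esx3": {"capacity": 3, "vms": []},
}
failover(hosts, "esx1")
print(hosts)  # web1 and db1 now run on the surviving hosts
```

The key point the model captures: nothing is copied off the failed host, because there was never any VM data on it to begin with.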
iSCSI is a storage networking protocol that provides fast block-level access over an ordinary TCP/IP network. Creating an iSCSI initiator on the home client allows a connection to the iSCSI target, which will be the NAS/SAN device. Very similar to Fibre Channel (a dedicated fiber-optic connection with very fast network data speeds), iSCSI is essentially the poor man's version, and it allows the initiator (home client) to connect to the target (storage device). One of the pros of using iSCSI is that you can run applications that require local storage on the iSCSI volume instead, which gives you flexibility and availability for work processes without necessarily using up a local storage drive. One thing I learned when configuring the iSCSI initiator is that only one initiator can be connected to a target at a time. This prevents data loss or corruption when multiple clients issue I/O requests against the same data, and it ensures the data stays consistent and available as needed.

When the storage device is configured and formatted, it is important to allocate the storage space as needed for iSCSI purposes. In some cases you might allocate half of the drive for the data share and leave the other half unallocated; that unallocated half becomes the iSCSI target and serves as the extension of the initiator PC's hard drive. After that has been done, access the target through the client's initiator and format it like any other local drive, specifying a volume size, drive letter, file system, and volume label so users can transfer files and run programs from the storage device!
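The one-initiator-at-a-time rule can be modeled in a few lines of Python. This is a conceptual sketch only, real targets enforce exclusivity through sessions and reservations, and the IQN-style name below is made up:

```python
# Toy model of the one-initiator-at-a-time rule on an iSCSI target.
# (Illustrative only; the target name is an invented IQN-style example.)

class IscsiTarget:
    def __init__(self, name):
        self.name = name          # e.g. "iqn.2024-01.example:store" (made up)
        self.initiator = None     # at most one connected initiator

    def connect(self, initiator):
        if self.initiator is not None:
            raise RuntimeError(
                f"{self.name} already in use by {self.initiator}; "
                "concurrent initiators risk corrupting the volume")
        self.initiator = initiator

    def disconnect(self):
        self.initiator = None


target = IscsiTarget("iqn.2024-01.example:store")
target.connect("workstation-a")
try:
    target.connect("workstation-b")   # second login is refused
except RuntimeError as e:
    print(e)
```

Because the target hands out raw blocks with no file-level locking, two initiators writing at once would each assume they own the disk, which is exactly the corruption scenario the rule prevents.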