RAID 0 stripes data equally across the available drives, giving very high read and write performance but offering no fault tolerance or redundancy. Unlike RAID 5, if you lose one of the disks in the array, your data will be lost. A minimum of 2 disks is required to configure a RAID 0 array.

My server currently has 4 additional disks:

hostname: scsi_disk_server

OS: CentOS 7

Let's check our RAID disk, /dev/md0:
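If you want to reproduce the setup, here is a minimal sketch of creating and checking the array, assuming the two member disks are /dev/sdb and /dev/sdc (as /proc/mdstat confirms further down) and that mdadm is installed:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc   # build the RAID 0 array
mdadm --detail /dev/md0                                                # detailed array status
cat /proc/mdstat                                                       # quick kernel-side view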

Now I have 3 choices: use this array on this server by creating partitions on it (or not), export the array to another server over iSCSI and create partitions on that remote server, or use the whole disk after creating a filesystem on it.

When exporting the storage over iSCSI, I can either partition it on my scsi_disk_server host, create the filesystems there and export those volumes separately to the remote machine, or export the storage directly as a block device and then do whatever I want on the remote server.

On my storage host, I'll format the whole disk. Let's assume we are using this server as a general storage host, so we'll do everything related to storage management on this server.

The first step is to use the disk's filesystem on our storage server itself. Then I'll show how to export this filesystem to a remote machine over iSCSI. I'll create a filesystem and use the whole disk with a single mount point. Alternatively, you may want to include this md0 RAID array in LVM: create a PV, combine PVs into a VG and create multiple partitions/filesystems like /var, /home, /backup_data, etc. To do so, follow the traditional LVM procedure, starting with the pvcreate /dev/md0 command.
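As a sketch of the whole-disk approach (the mount point /raid0_data and the choice of ext4 are my own examples, not fixed by this setup):

mkfs.ext4 /dev/md0            # create a filesystem directly on the array
mkdir /raid0_data             # example mount point
mount /dev/md0 /raid0_data

# alternative: put the array under LVM instead of formatting it directly
pvcreate /dev/md0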

/proc/mdstat shows us that we have a raid0 array built from 2 disks: sdb and sdc.
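The output should look roughly like this (the block count and chunk size are illustrative):

Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
      41908224 blocks super 1.2 512k chunks

unused devices: <none>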

ATTENTION: If you want to install the operating system on RAID and boot from the RAID array, you must create partitions of type “0xFD – Linux raid autodetect” and mark your MBR partition with the bootable flag.
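For reference, a rough fdisk sketch of that case, assuming /dev/sdb is one of the member disks (our tutorial does not need this, since we use the whole disks):

fdisk /dev/sdb
# inside fdisk: n = new partition, t = change type to "fd" (Linux raid autodetect),
# a = toggle the bootable flag, w = write the partition table and exit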

In the next part below, we will see how to export the RAID array to a remote server over iSCSI.

So, the following part is optional.

Hostname: najerilla

OS: CentOS 6

But first, we need to configure iSCSI on the target server, meaning my scsi_disk_server (the server where we configured RAID 0), which is the target host from the client server's point of view.

Install the packages:

On the target server, the owner of the disk: yum install targetcli (this package provides the targetcli shell and the target service used below)

systemctl start target

Start targetcli, then go into /backstores/block (where the disks we will share are listed). We have no LUN created yet; check with the ls command and you will see nothing.
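A minimal sketch of that first step inside the targetcli shell:

targetcli
/> cd /backstores/block
/backstores/block> ls        # no storage objects listed yet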

We already created our RAID 0 array, /dev/md0.

So, let's use this disk as a block storage object. Then we will generate an IQN automatically by typing only the create command. Alternatively, you can write your own IQN, respecting the correct syntax.
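A sketch of both steps; the object name raid0_disk and the hand-written IQN below are my own examples:

/backstores/block> create name=raid0_disk dev=/dev/md0   # expose /dev/md0 as a block backstore
/backstores/block> cd /iscsi
/iscsi> create                                           # auto-generates a target IQN
# or, with a hand-written IQN instead:
# /iscsi> create iqn.2023-01.com.example:raid0-target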

Now we need to create an ACL so that only our iSCSI initiator on the client side can access the disk.
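A sketch, using the target IQN generated on your system and an example initiator name for the client (it must match what we put in the client's initiatorname file later):

/iscsi> cd <your-target-iqn>/tpg1/acls                    # replace with the IQN shown by ls
/iscsi/.../acls> create iqn.1994-05.com.redhat:najerilla-client   # example initiator name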

We will create a LUN from the block storage object we created from that disk.
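Sketch, continuing inside the same target portal group:

/iscsi/.../acls> cd ../luns
/iscsi/.../luns> create /backstores/block/raid0_disk      # maps the backstore as LUN 0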

The final step is to create a portal; we will give our target server's (the current machine's) IP address in order to publish the LUN over the network.

Remember, if you have a default portal like I do (check with the ls command), you must first delete it before creating a new portal.
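Sketch; 192.168.1.10 stands in for scsi_disk_server's real IP address:

/iscsi/.../luns> cd ../portals
/iscsi/.../portals> delete ip_address=0.0.0.0 ip_port=3260   # drop the default wildcard portal
/iscsi/.../portals> create 192.168.1.10                      # listen on the target server's IP
/iscsi/.../portals> cd /
/> saveconfig                                                # persist the configuration
/> exit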

We are ready to configure the iSCSI initiator on the client side.

yum install iscsi-initiator-utils

Replace the initiator name with the one allowed for the client:
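On the client (najerilla), edit the initiator name file so it matches the ACL created on the target; the IQN below is the same example value used in the ACL step:

vi /etc/iscsi/initiatorname.iscsi
# set its single line to:
# InitiatorName=iqn.1994-05.com.redhat:najerilla-client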


Restart the iSCSI services: systemctl restart iscsid iscsi (on a CentOS 6 client, use service iscsid restart and service iscsi restart instead)

If the service does not start via the systemctl or service commands, try discovering the target with the iscsiadm command; the service will then be started automatically.
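Sketch of discovery and login from the client; the IP is the example portal address from the target setup:

iscsiadm -m discovery -t sendtargets -p 192.168.1.10    # list targets published by that portal
iscsiadm -m node --login                                # log in to the discovered target
lsblk                                                   # the exported array should appear as a new disk, e.g. /dev/sdb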


Now use the traditional way: create a PV, then allocate the space to volume groups. I'll create one VG and 2 different LVs of 1 GB each.
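A sketch, assuming the iSCSI disk showed up as /dev/sdb on the client (the VG and LV names match the device-mapper paths used below):

pvcreate /dev/sdb                   # initialize the iSCSI disk as a physical volume
vgcreate raid0_vg /dev/sdb          # one volume group on top of it
lvcreate -L 1G -n data1 raid0_vg    # first 1 GB logical volume
lvcreate -L 1G -n data2 raid0_vg    # second 1 GB logical volume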


Format the filesystems with ext4 or xfs as you wish.

mkfs.ext4 /dev/mapper/raid0_vg-data1

mkfs.ext4 /dev/mapper/raid0_vg-data2

Create mount points as you wish. I'll create data1 and data2.
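Sketch, using /data1 and /data2 as the mount points (the paths are my own choice):

mkdir /data1 /data2
mount /dev/mapper/raid0_vg-data1 /data1
mount /dev/mapper/raid0_vg-data2 /data2
df -h /data1 /data2                 # confirm both filesystems are mounted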


Your filesystems on the RAID 0 array are ready to use. The logic remains the same for other RAID levels.