What is RAID 1+0? RAID arrays and how they are created

RAID 1+0 (also called RAID 10) is an array of independent disks similar to RAID 0+1; the difference lies only in the order in which the mirroring and striping levels are nested.

The disks are first grouped into RAID 1 mirror pairs. These mirror pairs are then combined into a secondary array striped as RAID 0.

Data recovery: a disk in a RAID 1 mirror pair can fail without any data being lost. The downside of the scheme is that failed disks have to be replaced, and while the array is being rebuilt the user has to put up with degraded system performance.

A RAID 10 system can include a special "hot spare" disk, which automatically replaces a failed disk in the array.

  • Performance and speed
  • Depending on the number of disks and their characteristics, RAID 10 in most cases offers higher throughput and shorter recovery times than the other RAID levels, with the exception of RAID 0 (which has the highest raw throughput).

This makes it one of the preferred choices for demanding applications that require high performance from the storage system.

A RAID 10 array can also include more than the minimum number of disks.

The minimum number of disks is 4; the maximum is 16.

What is the difference between RAID 1+0 and RAID 0+1?

The key difference between the hybrid schemes RAID 0+1 and RAID 1+0 lies in how the two sub-levels are nested: RAID 0+1 is a mirrored system in which two RAID 0 stripes are combined into a RAID 1, whereas RAID 1+0 combines several RAID 1 mirrors into one RAID 0. Outwardly, RAID 0+1 looks the same as RAID 10.

Some manufacturers recommend using RAID 1+0 instead of RAID 0+1, since it provides better data safety for a running system.

Theoretically, RAID 0+1 and RAID 1+0 offer the same fault tolerance.

In practice, however, most controllers do not achieve such impressive reliability figures for both layouts.
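To make the difference concrete, here is a small sketch (hypothetical Python, not from the original article) that assumes the common simplified model in which one failed disk in RAID 0+1 takes its whole stripe group offline, and counts how many two-disk failure combinations each eight-disk layout survives.

```python
from itertools import combinations

DISKS = list(range(8))  # eight disks, indices 0..7

def raid10_survives(failed):
    # RAID 1+0: four mirror pairs (0,1) (2,3) (4,5) (6,7) striped together.
    # The array survives as long as no mirror pair loses both of its disks.
    pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
    return all(not (a in failed and b in failed) for a, b in pairs)

def raid01_survives(failed):
    # RAID 0+1: two stripe groups (0..3) and (4..7) mirrored to each other.
    # Simplified model: one failed disk disables its whole stripe group,
    # so the array survives only if at least one group is fully intact.
    groups = [set(range(0, 4)), set(range(4, 8))]
    return any(group.isdisjoint(failed) for group in groups)

double_faults = list(combinations(DISKS, 2))
ok10 = sum(raid10_survives(set(f)) for f in double_faults)
ok01 = sum(raid01_survives(set(f)) for f in double_faults)
print(f"RAID 1+0 survives {ok10} of {len(double_faults)} two-disk failures")
print(f"RAID 0+1 survives {ok01} of {len(double_faults)} two-disk failures")
```

Under this simplified model RAID 1+0 survives far more second failures, which matches the recommendation above; with a controller that tracks individual disks the two layouts are theoretically equivalent.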

  • System advantages
  • The RAID 1 mirror provides the system's reliability, while RAID 0 increases its performance.
  • System disadvantages
  • The disadvantages of RAID 10 are the same as those of RAID 0. It is recommended to include hot spare disks in the array at a rate of one spare per five working disks.
  • Example of RAID 1+0 operation:
  • Disks 1+2 = RAID 1 (Mirror set A)
  • Disks 3+4 = RAID 1 (Mirror set B)
  • Disks 5+6 = RAID 1 (Mirror set C)
  • Disks 7+8 = RAID 1 (Mirror set D)
  • Disks 9+10 = RAID 1 (Mirror set E)
  • Disks 11+12 = RAID 1 (Mirror set F)
  • Disks 13+14 = RAID 1 (Mirror set G)
  • Disks 15+16 = RAID 1 (Mirror set H)
  • Disks 17+18 = RAID 1 (Mirror set I)
  • Disks 19+20 = RAID 1 (Mirror set J)

We can then stripe RAID 0 across all mirror sets A to J. For example, if disk 5 fails, only mirror set C is affected. That set still contains disk 6, so the array keeps functioning and continues to operate without data loss.

The problem of increasing both the reliability of data storage and the performance of the disk subsystem has long occupied the minds of computer peripheral developers. With reliability everything is clear: information is not merely a commodity - often it is far more valuable. To protect against data loss many methods have been invented, the most well-known and reliable of which is backup copying of information. The question of increasing the performance of the disk subsystem is much more complex.

Fifteen years have passed since the original publication, but RAID technology has not lost its relevance to this day.

The only thing that has changed since then is the expansion of the RAID acronym. Because the first RAID arrays were not built on cheap disks at all, the word Inexpensive was replaced with Independent, which better reflects reality. Moreover, RAID technology itself has become far more widespread: while RAID arrays were once used only in expensive enterprise-class servers built on SCSI disks, today they have become a de facto standard even for entry-level servers.

In addition, the market for IDE RAID controllers is expanding rapidly, so building RAID arrays on workstations from inexpensive IDE disks is becoming practical as well.

Some motherboard manufacturers (Abit, Gigabyte) have even begun integrating IDE RAID controllers onto the boards themselves. Increased disk subsystem performance comes from several disks operating simultaneously: the more disks in the array (up to a certain limit), the better. Disks in an array can be organized for either parallel or independent access. With parallel access, the disk space is divided into blocks (strips) for recording data.

The information to be written to the array is split across these blocks in the same way.

Consider an example where the strip size is 8 KB and the write request is 64 KB. The incoming data is cut into blocks of 8 KB each. With an array of four disks, four blocks, i.e. 32 KB, can be written at a time.

Obviously, in this example the write and read speeds are four times higher than with a single disk. This is an ideal situation, however, since the size of the data will not always be a multiple of the block size times the number of disks in the array. If the size of the data being written is smaller than the block size, a fundamentally different access model is used: independent access.
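As a rough sketch of the striping described above (hypothetical Python; the 8 KB strip size, the 64 KB request and the four disks are taken from the example):

```python
STRIP_SIZE = 8 * 1024   # 8 KB strips, as in the example
NUM_DISKS = 4           # four disks in the array

def split_into_strips(data: bytes):
    """Cut the incoming write into strip-sized blocks and assign them
    to disks in round-robin order, as a RAID 0 controller would."""
    per_disk = [[] for _ in range(NUM_DISKS)]
    for i in range(0, len(data), STRIP_SIZE):
        disk = (i // STRIP_SIZE) % NUM_DISKS
        per_disk[disk].append(data[i:i + STRIP_SIZE])
    return per_disk

request = bytes(64 * 1024)            # a 64 KB write request
layout = split_into_strips(request)
for disk, strips in enumerate(layout):
    print(f"disk {disk}: {len(strips)} strips of 8 KB")  # 2 strips per disk
```

Each "round" of four strips (32 KB) can be written to the four disks simultaneously, which is where the roughly fourfold speed-up comes from.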

This model can also be used when the data size is larger than one block.

With independent access, all the data of a given request is written to a single disk, so the situation is essentially the same as working with one disk.

RAID level 0, strictly speaking, is not a redundant array and does not provide reliable data storage.

Nevertheless, this level is widely used wherever high disk subsystem performance is required; it is especially popular in workstations. When a RAID 0 array is created, information is divided into blocks and written to separate disks (Fig. 4), creating a parallel-access system (provided, of course, that the block size allows it).

The ability to read from and write to several disks simultaneously gives RAID 0 the highest data transfer speed and the most efficient use of disk space, since no space is needed to store checksums.

Implementing this level is also very simple.

In general, RAID 0 is used in those areas where the transmission of large amounts of data is required.

Naturally, when calculating with the above formula, L is rounded up to the nearest whole number.

However, instead of the formula you can use a simpler mnemonic rule: the size of the control word equals the number of digits needed to write the number of data disks in binary.

If, for example, there are four data disks (100 in binary), three digits are needed to write this number in binary, so the size of the control word is three.

Accordingly, with four disks for storing data, three more disks are needed for storing the control data.

Similarly, with seven data disks (111 in binary), three control disks are needed.

With eight data disks (1000 in binary), four control disks are needed.
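The mnemonic rule is easy to express in code (a small hypothetical Python sketch, not from the original article):

```python
def control_disks(data_disks: int) -> int:
    """Size of the control word by the mnemonic rule: the number of
    binary digits needed to write the number of data disks."""
    return len(bin(data_disks)[2:])  # bin(4) == '0b100' -> 3 digits

for n in (4, 7, 8):
    print(f"{n} data disks -> {control_disks(n)} control disks")
# 4 -> 3, 7 -> 3, 8 -> 4
```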

The Hamming code that forms the control word is based on the bitwise exclusive-OR (XOR) operation, also known as modulo-2 addition.

Recall that the logical XOR operation yields one when the operands differ (0 and 1) and zero when they are the same (0 and 0, or 1 and 1).

RAID 2 is one of the few levels that can not only correct single errors "on the fly" but also detect double errors.

At the same time, it is the most redundant of all the levels that use correction codes.

This storage scheme is rarely used: it copes poorly with a large number of requests, is difficult to organize and offers little advantage over RAID 3.

RAID 3. RAID level 3 is a fault-tolerant array with parallel I/O and one additional disk on which the control information is recorded (Fig. 7).

When writing, the data stream is divided into small blocks at the byte level (sometimes even at the bit level) and written simultaneously to all the disks of the array, while the control information is written to the dedicated disk.

The control information (also called the checksum) is calculated by applying the XOR operation to the data blocks being written.

If a disk fails, the data on it can be reconstructed from the checksum and the data remaining on the surviving disks.

As an illustration, consider blocks four bits in size. Suppose there are four disks for storing data and one disk for recording checksums. If the bit sequence 1101 0011 1100 1011 is divided into four-bit blocks, then to obtain the checksum we perform the operation:

1101 XOR 0011 XOR 1100 XOR 1011 = 1001

Thus, the checksum written to the fifth disk equals 1001.
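The checksum calculation and the recovery of a failed disk's block can be sketched as follows (hypothetical Python reproducing the four-bit example above):

```python
blocks = [0b1101, 0b0011, 0b1100, 0b1011]   # data blocks on disks 1-4

# Checksum written to the fifth disk: XOR of all data blocks.
parity = 0
for b in blocks:
    parity ^= b
print(f"{parity:04b}")                       # -> 1001

# If, say, disk 2 fails, its block is restored by XOR-ing the checksum
# with the blocks that survive on the other disks.
lost_index = 1
restored = parity
for i, b in enumerate(blocks):
    if i != lost_index:
        restored ^= b
assert restored == blocks[lost_index]        # 0011 recovered
```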

RAID level 4 is a fault-tolerant array of independent disks with one dedicated disk for storing the checksums (Fig. 8).

RAID 4 is in many ways similar to RAID 3, but differs from the latter primarily in a much larger block size (larger than the size of the data typically being written).

This is the main difference between RAID 3 and RAID 4. After a group of blocks is written, a checksum is calculated (in exactly the same way as in RAID 3) and written to the disk dedicated to that purpose.

Thanks to the block size being larger than in RAID 3, several read operations can be performed simultaneously (the independent-access scheme).

RAID 4 improves performance when transferring small files (by parallelizing the read operations).

But since every write must update the checksum on the dedicated disk, write operations cannot be performed in parallel (hence the obvious asymmetry between reads and writes). Nor does this level provide any speed advantage when transferring large volumes of data. The scheme was designed for applications in which the data is already divided into small blocks, so there is no need to split it further.

RAID 4 is a reasonable solution for file servers on which information is mostly read and rarely written.

This storage scheme has a low cost, but it is rather complex to implement, as is rebuilding the data after a failure.

The presence of a separate (physical) disk storing the checksum information here, as at the previous levels, means that read operations, which do not need to access that disk, proceed at high speed.

However, each write operation changes the information on the control disk, so RAID 2, RAID 3 and RAID 4 schemes do not allow parallel write operations.

RAID 5 avoids this limitation: small portions of the checksum are distributed across all the disks of the array, which makes it possible to perform several read and write operations simultaneously.
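How the checksums can be spread across the disks is shown in the sketch below (hypothetical Python; a simple rotating-parity layout, one of several used in practice):

```python
NUM_DISKS = 4  # e.g. a four-disk RAID 5

def stripe_layout(stripe: int, num_disks: int = NUM_DISKS):
    """For a given stripe number, return the disk holding the checksum
    and the disks holding data, rotating the checksum position so that
    no single disk becomes a write bottleneck (unlike RAID 4)."""
    parity_disk = (num_disks - 1) - (stripe % num_disks)
    data_disks = [d for d in range(num_disks) if d != parity_disk]
    return parity_disk, data_disks

for stripe in range(4):
    parity, data = stripe_layout(stripe)
    print(f"stripe {stripe}: parity on disk {parity}, data on disks {data}")
```

Because the checksum of each stripe lives on a different disk, writes that touch different stripes can update their checksums in parallel, removing the bottleneck inherent in RAID 2-4.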

Practical implementation

For the practical implementation of a RAID array, two components are needed: an array of hard drives and a RAID controller. The controller communicates with the server (workstation), generates the redundant information when writing and verifies it when reading, and distributes the data across the disks according to the algorithm of the chosen RAID level. An important characteristic of a RAID controller is the number of channels it supports for connecting hard drives.

Although several SCSI disks can be attached to a single controller channel, the total throughput of the RAID array will be limited by the throughput of one channel, i.e. by the bandwidth of the SCSI interface.

For servers it is also important that operation continues uninterrupted if one of the drives fails. Uninterrupted operation is ensured by hot swapping, that is, the ability to remove a faulty SCSI disk and install a new one without shutting the system down. Since with one failed drive the disk subsystem keeps working (except at level 0), hot swapping provides recovery that is transparent to users.


However, with one disk out of action the transfer speed and access time decrease noticeably, because the controller has to reconstruct the data from the redundant information.

True, there is an exception to this rule: RAID systems of levels 2, 3 and 4, when the drive holding the redundant information fails, actually begin to work faster!

This is only natural, since in that case the level effectively turns "on the fly" into level 0, which has excellent speed characteristics. Everything said so far concerns hardware solutions. There are also software RAID implementations, for example the one developed by Microsoft for Windows 2000 Server.

However, in this case the initial savings are offset by the additional load on the central processor, which, in addition to its main work, has to distribute the data across the disks and calculate the checksums. Such a solution is acceptable only when the server has spare computing power and modest performance requirements. Sergiy Pakhomov, ComputerPress 3'2002

Today we'll talk about RAID arrays.

Let's figure out what it is, why we need it, what kinds exist, and how to use all of this in practice.

So, first things first: what is a RAID array, or simply RAID? The abbreviation stands for "Redundant Array of Independent Disks". Put simply, it is a collection of physical disks combined into one logical drive.

Arrays can also be assembled in software by the OS: in that case the operating system "knows" that it has several physical disks in front of it, and only after the OS starts are the disks assembled into an array by additional software. Hardware RAID arrays, by contrast, are created before the OS is installed, so the operating system itself never deals with the individual disks.

"Why is all this needed?" you may ask. The answer: to increase the speed of reading/writing data and/or to increase fault tolerance and data safety. "And how exactly can RAID increase speed and keep data safe?" To answer that, let's look at the main types of RAID arrays, how they are built and what that gives us.

RAID-0. Also called "Stripe". Two or more hard disks are combined into one by interleaving their capacity: we can take two 500 GB disks, build a RAID-0 from them, and the OS will see a single 1 TB disk. The read/write speed of such an array is roughly twice that of a single disk, because, for example, while a database file is physically spread across the two disks, one user can be reading data from one disk while another is writing to the second disk at the same time.

RAID-1. Also called "Mirror". It gives no gain in speed, but provides high fault tolerance: if one of the hard drives dies, there is always a complete duplicate of the information on the other drive.

Keep in mind that fault tolerance here protects only against the physical death of one of the disks in the array. If data is deleted by mistake, it disappears from all the disks of the array at once!

RAID-5. A more fault-tolerant variant of RAID-0. The capacity of the array is given by the formula (N - 1) * DiskSize: from three 500 GB disks we get an array of 1 terabyte. The essence of the array is that the disks hold data striped as in RAID-0, while the capacity of one remaining disk is used for the so-called "checksum" - service information intended for rebuilding one of the disks of the array if it dies.

The write speed of such an array is noticeably lower, since time is spent computing and writing the checksum, but the read speed is the same as in RAID-0. If one of the disks of the array dies, read/write speed drops sharply, because every operation requires extra manipulations. In effect the array turns into a RAID-0, and if it is not rebuilt promptly there is a real risk of losing the data completely.

For such an array you can use a Spare disk, i.e. a reserve disk. During stable operation this disk sits idle and is not used. But as soon as a critical situation occurs, rebuilding starts automatically: the information from the failed disk is reconstructed onto the spare disk using the checksums stored on the remaining disks. A RAID-5 array is built from at least three disks and tolerates only single faults. If errors appear simultaneously on different disks, the array cannot cope with them.

RAID-6 is an improved version of RAID-5. The essence is the same, only the checksums now occupy not one but two disks' worth of capacity, and they are computed by two different algorithms, which significantly increases the fault tolerance of the whole array.

RAID-10. A symbiosis of RAID-0 and RAID-1.

The array is built from at least four disks: the first pair is combined into a RAID-0 stripe, the second pair into another RAID-0 stripe to increase read/write speed, and the two stripes are mirrored to each other as RAID-1 to increase fault tolerance. It thus combines the advantages of the first two options: speed and resilience.

RAID-50. Similar in spirit to RAID-10, it is a symbiosis of RAID-0 and RAID-5: in effect a RAID-0 stripe whose segments are not independent hard disks but RAID-5 arrays. It gives an even higher read/write speed while keeping the reliability of RAID-5. RAID-60 follows the same idea: in effect a RAID-0 assembled from several RAID-6 arrays.

There are also other combined arrays, RAID 5+1 and RAID 6+1. They resemble RAID-50 and RAID-60; the difference is that the basic elements of the array are not RAID-0 stripes but RAID-1 mirrors.
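For reference, the usable capacity of the basic levels described above can be estimated with a small sketch (hypothetical Python; the capacity rules for levels other than RAID-5 are stated as generally accepted approximations, not taken from the original text):

```python
def usable_capacity(level: str, n: int, disk_size_gb: float) -> float:
    """Approximate usable capacity of an array of n equal disks."""
    if level == "RAID-0":
        return n * disk_size_gb            # everything is data
    if level == "RAID-1":
        return disk_size_gb                # full mirror of one disk
    if level == "RAID-5":
        return (n - 1) * disk_size_gb      # one disk's worth of checksums
    if level == "RAID-6":
        return (n - 2) * disk_size_gb      # two disks' worth of checksums
    if level == "RAID-10":
        return n // 2 * disk_size_gb       # half the disks mirror the other half
    raise ValueError(level)

print(usable_capacity("RAID-5", 3, 500))   # 1000 GB, as in the example above
print(usable_capacity("RAID-10", 4, 500))  # 1000 GB from four disks
```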

As you can see, the combined arrays of the RAID X+0 type, and the RAID X+1 variants, are direct descendants of the basic array types: they simply improve either the read/write speed or the fault tolerance while keeping the functionality of the basic, parent types.

If we move on to practice and talk about where these array types are used in real life, the logic is simple:

RAID-0 in its pure form is best not used at all, since the failure of a single disk destroys all the data;

RAID-1 works well where read/write speed is not particularly important but fault tolerance is - for example, for the disks an operating system is installed on: the OS does not demand much of the disks, a mirrored pair is fast enough for that job, and resilience is ensured;

RAID-5 is used where both speed and fault tolerance are needed, but there is either no money for a large number of hard disks or a need to rebuild the array after a failure without stopping work - Spare disks help here. A typical application of RAID-5, RAID-6 or RAID-10 is a data store: the OS is installed on a RAID-1 mirror, and all the remaining disks are combined into a RAID-5 or RAID-10 for fast, reliable work with the data.

If, having read all this, you have decided to set up RAID arrays on your servers but do not know how to go about it or where to start - contact us! We will help you choose the necessary hardware, and we will also carry out the installation and configuration work.

Comparing the performance of solutions in the same price range

A simple fact: the Windows 7 Experience Index, which rates the performance of the main PC subsystems, gives a typical solid-state drive (SSD) - and far from the fastest one (around 200 MB/s for reading and writing, random access about 0.1 ms) - a score of 7.0, while the scores of all the other subsystems (processor, memory, desktop graphics, gaming graphics) on the same desktop system based on an older CPU (with the now-typical 4 GB of DDR3-1333 memory and a mid-range gaming video card such as the AMD Radeon HD 5770) come out above 7.0 (namely 7.4-7.8; this Windows 7 criterion uses a logarithmic scale, so a difference of a few tenths translates into tens of percent in absolute terms). In other words, in Windows 7's opinion a fast "off-the-shelf" SATA SSD is exactly what today's mid-range desktop PC deserves. So what must the performance of the system disk be for the "great and mighty" Seven to consider it a worthy match for the other components of such a PC?.. :)

What are the main disadvantages of current SSDs?

If we leave aside the long-running debates about their reliability, endurance and degradation over time, current SSDs have two main shortcomings: low capacity and high price.

Indeed, an average 128 GB MLC SSD costs about 8,000 rubles (the price at the time of writing; much depends on the model, of course, but the order of magnitude is the same). Meanwhile, traditional hard drives are not going anywhere in the PC, so why not combine them into RAID arrays?

Especially since a simple RAID controller comes to us essentially "for free": it is built into the south bridge of modern motherboards on AMD, Intel or Nvidia chipsets. For example, the same 8,000 rubles can be spent not on an SSD but on four terabyte HDDs, which also give us capacious data storage that an SSD of that price cannot offer. Or, instead of buying one SSD plus one 2-3 TB data disk, you can buy four 1.5-2 TB disks at once. Moreover, a RAID 0 of four disks has not only four times the capacity but also roughly four times the linear read/write speed - already 400-600 MB/s, something a single SSD of the same price never dreamed of! Such an array will therefore be noticeably faster than an SSD, at least for streaming data (reading/writing/editing video, copying large files and so on).

We will use motherboards on Intel chipsets with the ICH8R/ICH9R/ICH10R south bridge (or later), on which four terabyte disks are, in our view, best organized as follows. Thanks to Intel Matrix RAID technology, the first half of the disks' capacity is combined into a RAID 0 array of 2 TB (without special tricks this is the limit for operating systems older than Vista), which gives us the fastest possible system partitions, quick launching of applications and games, and high-speed everyday work with multimedia and other content. And to store our important data more reliably, we combine the second half of the capacity of these disks into a RAID 5 array (which, by the way, is also far from slow, as we will see a little later). In this way, for less than 8 thousand rubles we get a fast 2 TB system disk plus a 1.5 TB "archive" volume. It is in this configuration, with these two arrays created, that we carry out our further testing. However, those who particularly dislike RAID 5 on Intel controllers can instead make the second volume a somewhat smaller RAID 10: its read performance will be lower than that of RAID 5, in writes (with caching enabled) the two are roughly equal, but the reliability and survivability of the data in the event of a crash will be higher (in half the cases a RAID 10 array can survive even two failed disks).

The Intel Matrix Storage Manager utility lets you enable and disable write caching for such disk arrays by means of the operating system (i.e. using the PC's RAM); see the third line in the right-hand Information pane on the screenshot. This caching dramatically speeds up work with small files and data blocks, as well as the write speed on a RAID 5 array (where it is sometimes absolutely critical). Therefore, to be thorough, we ran the tests with caching both enabled and disabled. Along the way we will also look at how much the processor is loaded when caching is enabled.

Testing was carried out on a test system representing a typical desktop of not the most recent vintage: an Intel Core 2 Duo E8400 (3 GHz) processor, with a Seagate ST950042AS system drive running Windows 7 x64 Ultimate and Windows XP SP3 Pro (so the arrays under test were examined in a "clean" state, separate from the system drive).

As benchmarks, on the basis of which we will judge the contest between the SSD and traditional RAID, we used ATTO Disk Benchmark 2.41, Futuremark PCMark05, Futuremark PCMark Vantage x86, Intel NAS Performance Toolkit 1.7 and others. Each test was run five times and the results were averaged.

For reference, the charts with the test results also include data for a fast single 2 TB drive, the Seagate Barracuda XT ST32000641AS, whose capacity matches that of the "system" RAID 0 we built from the Hitachi Deskstar E7K1000 HDE7210 drives under test.

As the SSD we took an inexpensive yet quite fast 128 GB model priced (at the time of writing) at around 8,000 rubles: the PNY Optima SSD 128GB MLC. Let us first take a brief look at it.


The SSD PNY Optima 128GB Gen 2 (model number P-SSD2S128GM-CT01, firmware 0309) is a typical 2.5-inch SATA SSD in a stylish black metal case 9.5 mm thick. Its manufacturer, PNY, is known mainly for its flash drives and memory modules.

PNY Optima SSD 128GB MLC

The drive is built on Intel 29F64G08CAMDB MLC flash memory chips and a JMicron JMF612 controller, which allows this SSD to work not only over Serial ATA but also over USB 2.0 (the corresponding mini-socket sits next to the SATA port at the rear edge of the drive's case).

This solid-state drive can therefore also be used as a shock-proof portable storage device.

Unfortunately, the USB cable was not supplied with the kit.

The maximum sequential read and write speeds of the PNY Optima 128GB SSD in the ATTO Disk Benchmark 2.41 test (reading a 256 MB file in blocks from 64 KB to 8 MB) were 238 and 155 MB/s respectively, slightly above the claimed values (see the chart).

It is worth noting that the low-level HD Tach RW 3.0 test, which accesses the drive directly, bypassing the file system, gave 217 and 165 MB/s for these two parameters (see the graph). As for the multi-disk RAID arrays we assembled, RAID 0 showed a maximum large-file read/write speed of about 450 MB/s (also confirmed by the HD Tach RW 3.0 graphs), which is almost twice as high as the SSD! True, enabling write caching in Windows (WC=yes in the charts) somewhat reduces the sequential write speed, and the read speed as well, but not so critically as to spoil the overall impression.


As for the RAID 5 organized on the other half of our tested HDDs, its maximum sequential read speed exceeds 270 MB/s (clearly higher than any modern magnetic hard drive!), while its sequential write speed depends fundamentally on Windows caching: without it, writes fall to a depressing 40-50 MB/s, whereas with caching they come much closer to the read speed (again see the HD Tach RW 3.0 graphs), although they still do not match the read figures for RAID 5 the way they did for RAID 0. Be that as it may, our RAID 5 performs noticeably better than the single "seven-thousand-rpm" Seagate Barracuda XT.


Another indisputable benefit of Windows caching for disk arrays is a radical speed-up of work with small (under 64 KB) files and data blocks.


This is clear from the results of the ATTO Disk Benchmark 2.41 test (the vertical axis shows the data block size in KB; the bars give the speed values in KB/s).


RAID 0 without caching

RAID 0 with cache

RAID 5 without caching

The buffered read speed of 3-5 GB/s is of the same order as the system-memory bandwidth of a PC like our test one. The DMI bus, by which the ICH of Intel chipsets is connected to the system, has far more modest capabilities, essentially equal to a first-generation PCI Express x4 link (i.e. 1 GB/s in each direction).

Another interesting result in this chart: for RAID arrays (without caching), the speed of data transfer over the bus (including the SATA buses) between the host and the drives grows roughly in proportion to the number of disks in the array.

For RAID 0, for example, it noticeably exceeds the speed of exchange with a single SSD over the SATA bus. The conclusion, I think, is quite obvious.
Incidentally, the average random access time of the arrays (on small blocks) does not depend on the Windows cache when reading, but changes dramatically when writing (see the chart).

Moreover, for this essentially software RAID 5 without caching it is indecently long.

CPU load graphs with RAID caching

Interestingly, the CPU load for RAID 5 is slightly lower than for RAID 0 - possibly because of the noticeably higher read/write speed of the latter.

In addition, as the data block size grows the load on the processor decreases, and for blocks of 64 KB and larger it approaches the level seen with caching enabled.

Of course, this is only a rough, illustrative look at the question; the aspect could be studied more scrupulously and "cleanly", but in our case it is not an end in itself - the main thing that interests us here is drive performance. Next we evaluated comprehensive tests that simulate the work of various applications under Windows: PCMark Vantage, PCMark05 and the Intel NAS Performance Toolkit. Detailed results for each pattern of these tests are given in the table below.

Here we present only the summary charts, which give an idea of the average performance of the drives when working under Windows.

Test PCMark05

A more detailed analysis of the per-pattern results (see the table) shows that "not everything is so simple": in some cases our RAID 0 not only matches the SSD in speed (Movie Maker, video editing) but can even noticeably outrun it (Media Center). So for a media centre it may make more sense to use such an array rather than an SSD (the array also has far greater capacity). Caching here adds another 20-30% to the average performance of the arrays, and even the essentially software RAID 5 is fully competitive with a single top-end two-terabyte drive.

The newer and, in our opinion, more realistic Intel NAS Performance Toolkit test follows a different benchmarking philosophy from the "traces" of PCMark: it works directly with the file system of the drive under test, rather than replaying commands previously recorded (on another system) to addresses inside a pre-created temporary file - a scenario much closer to real-life use of multi-disk RAID arrays.

On average, our RAID 0 here outperforms the solid-state drive not only with caching (!) but even without it! And the software "archive" RAID 5 with caching turns out to be faster than the single Barracuda XT disk. A closer look (see the table) shows that in 10 of the 12 patterns the cached RAID 0 is faster than the SSD! Among them are video playback and recording, Content Creation, office work, photo processing (Photo Album) and file copying. Moreover, in four-stream video recording and in copying directories with many files the solid-state drive loses to the RAID 0 of traditional hard drives.

A few more remarks concern the power consumption and reliability of these solutions. Obviously, the 0.5-3 W consumed by a single SSD cannot compare with the 20-40 W of an array of four HDDs. But we are talking not about a laptop or nettop, but about a full-size desktop (otherwise, frankly, such a RAID would be out of the question anyway).

So power consumption has to be judged in the context of the whole system.

Against the background of the consumption of a typical desktop processor (100-200 W together with the motherboard) and video card (50-300 W), another couple of dozen watts for storage is hardly ruinous (only the most frugal will worry about the extra few kilowatt-hours on the home electricity bill :)). Bear in mind, too, that alongside an SSD you would still have to buy one or two HDDs (a rough estimate: 20 W x 8 h x 30 days = 4.8 kWh, i.e. at most 15-20 extra rubles a month for electricity). As for the reliability of both solutions, plenty of complaints can be found on the web about SSDs, about RAID on chipset controllers and about HDDs alike, even though the manufacturers claim MTBF figures in the millions of hours.




In any case, the best protection against data loss is regular backups to independent media. And don't forget the price.

Disks can be connected to a built-in controller or to an external one, or SAN (storage) systems can be used.

In all of these implementations, however, the disks are combined into logical pools called RAID arrays. This is how the question of fault tolerance and the safety of your data is addressed.

If one of the disks of the logical array fails, the service continues to run without interruption and without data loss.

Combining disks into a pool can also improve performance: RAID 0, for example, significantly increases read speed, but at the same time increases the probability of the array failing.

So, RAID is a data-storage virtualization technology that combines several disks into a single logical unit to increase fault tolerance and performance.

IOPS

An important indicator of disk subsystem performance is the number of elementary input/output operations per second (IOPS) that a disk can handle. For the disk subsystem these operations are reads and writes of data.
When planning a service, it is important to determine what load it will place on the disk subsystem. Note that such values are usually obtained empirically, or from experience already gained on similar projects. Then, based on the number of disks and the type of RAID array, you can check whether the IOPS requirements are met. Keep in mind that this is the total IOPS figure, which then has to be divided into read and write operations: for example, on DBMS servers the share is roughly 80% writes and 20% reads, while on file servers it is the reverse, about 80% reads and 20% writes.

Approximate per-disk IOPS values (Table 1):
SATA, 7,200 rpm - 100
SAS, 10,000 rpm - 140
SAS, 15,000 rpm - 210
SSD - 8,600

When calculating IOPS for RAID arrays, a write penalty also applies for each array type. For example, RAID 1 needs two operations to write data (a write to one disk and a write to its mirror), so its penalty is 2. RAID 5 needs four operations for each write: read the data, read the parity, write the data, write the parity, so its penalty is 4. For the combined arrays (10, 50, 60, 61) the penalty follows from the penalties of their constituent RAID arrays.

RAID penalty values are shown in Table 2.
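The penalty is easy to take into account when sizing the disk subsystem. Below is a hypothetical Python sketch (the 80/20 write/read split and the 140 IOPS per disk are only illustrative figures borrowed from the text above):

```python
import math

RAID_PENALTY = {"RAID-0": 1, "RAID-1": 2, "RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def disks_needed(front_iops: float, write_share: float,
                 raid_level: str, iops_per_disk: float) -> int:
    """Number of disks required to serve the load, taking the RAID
    write penalty into account."""
    penalty = RAID_PENALTY[raid_level]
    read_iops = front_iops * (1.0 - write_share)
    write_iops = front_iops * write_share * penalty  # each write costs `penalty` ops
    backend_iops = read_iops + write_iops
    return math.ceil(backend_iops / iops_per_disk)

# Example: a write-heavy load of 2000 IOPS (80% writes) on RAID-10
# built from disks that deliver about 140 IOPS each.
print(disks_needed(2000, 0.8, "RAID-10", 140))   # -> 26
```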

Types of RAID arrays

Below is a list of the most common RAID array types (see Table 2). For each type the table gives the minimum number of disks, the number of disks whose failure the array survives, the RAID penalty and a short description.

RAID 0 - disks: 2 or more; survives failure of: none; RAID penalty: 1. Information is divided into data blocks of fixed size and written across several disks.
RAID 1 - disks: 2; survives failure of: 1 disk; RAID penalty: 2. Data is written to one disk and duplicated (mirrored) on the other; write speed is no higher than that of a single disk.
RAID 5 - disks: 3 or more; survives failure of: 1 disk; RAID penalty: 4. Data blocks and checksums are written cyclically to all disks of the array.
RAID 6 - disks: 4 or more; survives failure of: 2 disks; RAID penalty: 6. Data blocks and two sets of checksums are written cyclically to all disks of the array.
RAID 10 - disks: 4 or more; survives failure of: from 1 to N/2 disks, provided they are in different mirrors; RAID penalty: 2. A striped array of the RAID 0 type whose segments are RAID 1 mirrors instead of individual disks.
RAID 50 - disks: 6 or more; survives failure of: from 1 to 2 disks, provided they are in different stripes; RAID penalty: 4. A striped array of the RAID 0 type whose segments are RAID 5 arrays instead of individual disks.

Table 2. The most common types of RAID arrays

RAID 60 and 61 are, respectively, RAID 0 and RAID 1 combinations whose segments are RAID 6 arrays rather than individual disks. Such arrays inherit all the advantages and disadvantages of their constituent RAID arrays.

In practice, the most common RAID arrays are RAID 1, RAID 5 and RAID 10.

Disk subsystem performance indicators

Disk subsystem performance should be monitored using the following indicators:

Disk idle time

Shows the time the disk was idle, i.e. the time during which the disk was not servicing read/write operations.

Unlike the next indicator, it lies strictly in the range from 100% (completely idle) to 0% (fully loaded).

Accesses to the disk

This indicator essentially shows the IOPS figure.

Its limiting values were discussed in the calculations above.

The indicator can be broken down separately for disk writes and disk reads.

Average disk access time

The average time, in seconds, the disk takes to complete one read or write operation.

It is made up of the average time per read operation and the average time per write operation.

Average disk queue length

Shows the average number of disk operations awaiting processing over the measurement period.

This value is derived from Little's law: the number of requests awaiting processing equals the average arrival rate of requests multiplied by the average time needed to service one request.
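As a quick check of how Little's law is applied here, a tiny sketch (hypothetical Python; the numbers are purely illustrative):

```python
# Little's law: average queue length = arrival rate * average service time.
arrival_rate_iops = 200        # requests arriving per second
service_time_s = 0.01          # 10 ms to process one request

avg_queue_length = arrival_rate_iops * service_time_s
print(avg_queue_length)        # 2.0 outstanding operations on average
```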

Current disk queue length

Shows the number of disk operations awaiting processing at the current moment.



Disk transfer rate

Shows the average number of bytes transferred to and from the disk per second during read/write operations.

Average size of one disk transfer

The number of bytes transferred in a single I/O operation.

Reported as the arithmetic mean over the measurement period.

Splitting of I/O operations to disk