Is your storage ready for the age of AI?


  • 27 March 2020

The development and evolution of an enterprise's internal storage architecture is a long-term, continual process. IT planners may regard artificial intelligence (AI) as a transformation project that will only need investment a few years from now. However, the AI wave is arriving faster than expected, and more and more industries will use AI to drive business change. Moreover, AI workloads differ from any IT load processed before. They have entirely new characteristics: they face massive unstructured data sets and demand extremely high random-access performance, extremely low latency, and large-scale storage capacity.

AI fundamentally changes how existing organizations conduct business

AI will not only create entirely new industries but will also fundamentally change how existing organizations conduct business. IT planners need to start examining now whether their storage infrastructure is ready for the coming AI wave.

What requirements does AI place on storage?

Before evaluating any AI-oriented storage solution, we first need to understand the characteristics of data in AI workloads and, based on those characteristics, what kind of storage is needed. Through this layer-by-layer analysis, we can distill the full set of demands that the AI business places on storage.

Massive unstructured data storage

Apart from a few business scenarios that mainly analyze structured data (such as risk control and trend prediction over consumption and transaction records), most scenarios deal with unstructured data: image recognition, speech recognition, autonomous driving, and so on. These scenarios usually use deep learning algorithms and must rely on massive inputs of images, audio, and video.

Data sharing access

Multiple AI compute nodes need shared access to data. Because the AI architecture uses a large-scale computing cluster of GPU servers, the servers in the cluster read from a unified data source, that is, a shared storage space. Shared access to data has many benefits: it ensures that different servers see consistent data, and it avoids the redundancy of keeping separate copies of the data on each server.

So which interface provides shared access?

Block storage relies on upper-level applications (such as Oracle RAC) to implement coordination, locking, and session switching before a block device can be shared among multiple nodes, so it is not suitable for direct use by AI applications.

Object storage and file storage can both provide shared access, so at the interface level either appears to achieve data sharing. To decide which interface is more convenient, we need to look more closely at how AI's upper-level application frameworks actually use storage.

Mainstream frameworks such as TensorFlow and PyTorch load training data through ordinary POSIX file paths, so a mounted file system needs no adaptation, whereas object storage requires SDK or REST calls woven into the data pipeline. Therefore, from the perspective of the AI application framework, the file interface is the friendliest storage access method.
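
To make this concrete, here is a minimal sketch, assuming a PyTorch-style training job and an illustrative mount point /mnt/shared/train, of how naturally a framework consumes a shared file system: the dataset is just a POSIX directory tree, and every GPU node that mounts it can read it without code changes.

    # Minimal sketch: PyTorch's ImageFolder walks an ordinary POSIX directory,
    # so any shared file system mounted on each GPU node works unchanged.
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    train_set = datasets.ImageFolder(
        root="/mnt/shared/train",          # illustrative shared-filesystem mount
        transform=transforms.ToTensor(),
    )
    loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=8)

    # An object store, by contrast, is consumed through SDK calls such as
    # s3.get_object(Bucket=..., Key=...), which must be wired into the loader.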

Read-heavy, write-light, with high throughput and low latency

AI data access is read-heavy and write-light, and requires high throughput and low latency. Take visual recognition as an example of deep learning training: it must load tens of millions or even hundreds of millions of images and run convolutional neural network architectures such as ResNet over them to produce a recognition model. After each round of training, to reduce the effect of image input order on the training result, the file order is shuffled and the data is reloaded, for many rounds (each round is called an epoch). This means every epoch loads those tens or hundreds of millions of images again in a new order. The speed of reading each image, that is, the read latency, strongly affects how long the whole training process takes.
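
As a hedged sketch of this access pattern (the mount point, epoch count, and training step are illustrative), the loop below re-reads every image file once per epoch, in a freshly shuffled order, so per-file read latency is paid on every pass:

    import random
    from pathlib import Path

    DATA_DIR = Path("/mnt/shared/train")   # illustrative shared-filesystem mount
    files = sorted(DATA_DIR.rglob("*.jpg"))

    for epoch in range(90):                # e.g. a 90-epoch training schedule
        random.shuffle(files)              # fresh order breaks input-order correlation
        for path in files:
            data = path.read_bytes()       # one full storage read per image, per epoch
            # ... decode `data` and feed it into the training step ...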

As mentioned earlier, both object storage and file storage can provide shared data access for GPU clusters, so which interface offers lower latency? Leading high-performance object stores deliver read latencies of roughly 9 ms, while high-performance file systems typically achieve 2-3 ms. Multiplied over many epochs of loading hundreds of millions of images, this gap widens into a serious drag on AI training efficiency.
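
A back-of-envelope calculation shows the scale of the effect. Assuming the latencies quoted above, an illustrative workload of 100 million images over 90 epochs, and 64 parallel reader workers (all assumed figures; real loaders pipeline reads, so treat the absolute numbers as rough upper bounds), the ratio between the two stores is the point:

    images, epochs, workers = 100_000_000, 90, 64   # assumed workload figures
    object_ms, file_ms = 9.0, 2.5                   # latencies quoted above (2-3 ms midpoint)

    def total_hours(per_read_ms):
        # total reads x latency, spread across parallel workers, in hours
        return images * epochs * per_read_ms / workers / 1000 / 3600

    print(f"object store: ~{total_hours(object_ms):,.0f} h")   # ~352 h
    print(f"file system:  ~{total_hours(file_ms):,.0f} h")     # ~98 h, ~3.6x faster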

From the perspective of file-loading latency, then, high-performance file systems are also the first choice for AI.

Complex IO patterns

AI workloads mix large files, small files, sequential reads, and random reads. Data for different business types has different characteristics: visual recognition usually processes small files under 100 KB, while speech recognition mostly handles larger files above 1 MB, and these independent files are read sequentially. Some algorithm engineers also aggregate tens or hundreds of thousands of small files into a single large file of hundreds of GB or even TB scale; in each epoch, these large files are then read randomly, in an order generated by the framework, as sketched below.
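
The sketch below illustrates that aggregation pattern under stated assumptions (the file layout and index format are invented for illustration): many small files are packed into one large file plus an offset index, and each epoch then reads records back with seek() in a random order.

    import json, random
    from pathlib import Path

    def pack(src_dir, out_path):
        # Concatenate small files into one large file; record (offset, length).
        index, offset = {}, 0
        with open(out_path, "wb") as out:
            for f in sorted(Path(src_dir).rglob("*")):
                if f.is_file():
                    data = f.read_bytes()
                    index[f.name] = (offset, len(data))
                    out.write(data)
                    offset += len(data)
        Path(out_path + ".idx").write_text(json.dumps(index))

    def random_reads(out_path):
        # Per-epoch random access: shuffle record order, then seek and read.
        index = json.loads(Path(out_path + ".idx").read_text())
        names = list(index)
        random.shuffle(names)              # order regenerated each epoch
        with open(out_path, "rb") as f:
            for name in names:
                offset, length = index[name]
                f.seek(offset)
                yield name, f.read(length)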

Given that file size and IO type cannot be predicted in advance, high-performance support for these complex, mixed IO patterns is likewise a storage requirement of the AI business.

AI business containerization

AI application businesses are gradually migrating to Kubernetes container platforms, so data access should naturally be as convenient as possible for workloads running on those platforms. This is easy to see historically. In the era of a single business running on one server, data sat on disks directly attached to the server, the DAS model. When business moved to clusters of physical machines, data was stored on SAN arrays for unified management and ease of use. In the cloud era, data moved again, onto distributed storage and object storage suited to cloud access. Data, in other words, is always stored and managed wherever the business can reach it most conveniently. In the container and cloud-native era, data should therefore live on the storage that cloud-native applications can access and manage most easily, as sketched below.
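
As one hedged example of what "convenient for cloud-native access" means in practice, this sketch uses the official kubernetes Python client to request a shared volume; the storage class name and capacity are illustrative, and ReadWriteMany is the access mode that lets many training pods mount the same data:

    from kubernetes import client, config

    config.load_kube_config()              # or load_incluster_config() inside a pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="ai-training-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],            # shared by many pods
            storage_class_name="shared-filesystem",    # illustrative class name
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Ti"},          # illustrative capacity
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc,
    )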

Operating platform evolves to public cloud

The public cloud has become the preferred operating platform for AI services, yet the cloud providers' native storage offerings are oriented toward general-purpose applications and fall short of AI's high-throughput, low-latency, large-capacity requirements. Most AI businesses are somewhat tidal, and the public cloud's elasticity and pay-as-you-go model, together with mature high-performance GPU server offerings, have made public cloud compute the first choice for cutting cost and raising efficiency in the AI business. What is still missing is a public cloud storage solution that matches the AI business and delivers the characteristics described above. In recent years, some foreign storage vendors (such as NetApp, Qumulo, and ElastiFile) have released products that run on the public cloud, which, read against users' concrete application requirements, confirms that the clouds' native storage products and solutions are lacking. Landing a storage solution suited to AI applications on the public cloud is thus the last mile of AI's further adoption there.

Can any existing AI storage solution meet the needs of large-scale AI applications described above?

Storing data directly on the GPU server's own SSDs is the DAS approach. It guarantees high bandwidth and low latency for reads, but compared with other approaches its disadvantages are obvious: capacity is very limited, the SSD or NVMe disks cannot be fully utilized (high-performance NVMe utilization is usually below 50%), the SSDs on different servers form islands, and data redundancy is severe. This approach is therefore rarely used in real AI business practice.

Shared scale-up storage arrays are the most common and probably the most familiar of the available sharing solutions. Like DAS, shared arrays have similar disadvantages, and AI workloads expose them faster than traditional workloads do. The most obvious question is total capacity: most traditional array systems can only grow to about 1 PB per system, while most large-scale AI workloads need tens of PB, so enterprises must keep purchasing new arrays, breeding data silos. Even if the capacity challenge is overcome, traditional arrays run into performance problems: they usually support only a small number of storage controllers, most commonly two, while a typical AI workload is highly parallel and can easily overwhelm them.

Users also commonly turn to open-source distributed file systems such as GlusterFS, CephFS, and Lustre. Their primary problem is the complexity of management, operations, and maintenance. Second, GlusterFS and CephFS struggle to guarantee performance with massive numbers of small files at large scale and large capacity. Given how expensive GPUs are, if data access cannot keep them fed, the GPUs' return on investment drops sharply, the last thing managers of AI applications want to see.

Another option is to establish a file-access gateway in front of an object store. First, object storage is inherently poor at random and append writes, so write support in AI services suffers. Second, object storage's read-latency disadvantage is magnified further once requests pass through the gateway. Some data can be preloaded onto front-end SSDs through read-ahead or caching, but this brings its own problems: 1) the upper-layer AI framework must adapt to the special lower-layer architecture, which is intrusive, for example running read-ahead programs; 2) data loading becomes uneven, and while data is still loading or when the front-end SSD cache misses, GPU utilization drops by 50-70%. A sketch of this read-through cache pattern, and the stall it causes on a miss, follows below.
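
A minimal sketch of that read-through cache, assuming S3-style object storage accessed via boto3 (the bucket, key, and cache directory are illustrative), shows where the stall happens: on a miss, the training loop blocks on a full object-store round trip.

    import boto3
    from pathlib import Path

    s3 = boto3.client("s3")
    CACHE = Path("/var/cache/ai")          # front-end SSD cache (illustrative path)

    def read(bucket, key):
        cached = CACHE / bucket / key
        if cached.exists():
            return cached.read_bytes()     # hit: local SSD latency
        # Miss: the GPU waits out a full object-store GET; this is when
        # utilization drops as described above.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        cached.parent.mkdir(parents=True, exist_ok=True)
        cached.write_bytes(body)
        return body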

Judged just on data-scale scalability, access performance, and general fit with AI platforms, none of the above solutions is an ideal AI-oriented storage solution.

Summing up

Through this analysis, we hope to offer planners of the AI business observations and insights into its real storage needs, help customers bring AI businesses to production, and point toward optimized AI storage products and solutions. AI, as the next revolution after the information industry, will once again change the world's technology and direction. The AI wave has arrived sooner than expected; it is time to consider a new type of storage built for AI.