In the era of big data, data is the foundation and business is the core, so data security must be tied to the form of the business. As a result, data security is gradually separating from perimeter network security. Viewed against the overall development of informatization, attention to data security has lagged: most industries have run their information systems for many years, and retrofitting data security work onto them remains very difficult. This is especially true for controlling an industry's most sensitive data, such as data about celebrities, senior officials, and executives. Such data is usually mixed in with ordinary people's data. Controlling it fully hurts the convenience of the business; leaving it uncontrolled means that a single leak can cost the enterprise dearly.

Analysis of existing schemes for controlling highly sensitive data

In the era of big data, the value of data keeps rising, and, driven by profit, leaks of such data occur from time to time. So how can we strengthen control over the information of these highly sensitive individuals? To find out, I visited several data security vendors, collected three schemes, and analyzed their drawbacks.

Scheme 1: deploy a separate application, such as a "VIP system," to isolate highly sensitive data from ordinary sensitive data, protect it in a targeted manner, and have it maintained by a dedicated team, thereby narrowing the scope of data use.

This control scheme has several disadvantages. The first is higher operating costs: a separate set of software and hardware resources must be purchased, and staff must be assigned to operate and maintain it.

It also causes duplicated investment: to ensure security, data security products and capabilities usually have to be configured for both systems, which leads to substantial duplicated spending.

In addition, from the perspective of maximizing data value, an application system holding highly sensitive data under this control model rarely dares to offer services externally. This quietly creates a data silo, which works against realizing the value of the data.

Scheme 2: mark the highly sensitive data. Determine each item's sensitivity level at the moment it is generated and tag it accordingly. Highly sensitive data can then be clearly identified, authorized in a targeted way, and protected with measures such as encryption or desensitization whenever it is used.
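
As a rough sketch of what the marking step might look like in practice (the levels, field names, and the is_vip rule below are my own illustrative assumptions, not details from any vendor's scheme), each record could carry a sensitivity label assigned once, at generation time:

    from dataclasses import dataclass
    from enum import Enum


    class Sensitivity(Enum):
        # Illustrative levels; a real taxonomy would come from the
        # organization's own data classification standard.
        ORDINARY = 0
        SENSITIVE = 1
        HIGHLY_SENSITIVE = 2


    @dataclass
    class Record:
        name: str
        phone: str
        sensitivity: Sensitivity  # the mark, fixed when the record is created


    def create_record(name: str, phone: str, is_vip: bool) -> Record:
        # Scheme 2 decides the sensitivity level at generation time.
        level = Sensitivity.HIGHLY_SENSITIVE if is_vip else Sensitivity.SENSITIVE
        return Record(name=name, phone=phone, sensitivity=level)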

Obviously, this method avoids duplicated investment and costs far less, and if permissions are controlled well, the highly sensitive data can still be shared externally. However, the scheme requires extensive modification of application systems: sensitivity must be judged when data is generated, authorization must be handled separately, and encryption or desensitization must be applied when data is used. These changes are substantial and may even alter the top-level design.

Moreover, highly sensitive data, like ordinary sensitive data, appears in many usage scenarios, such as data updates, deletion, analysis, verification, and queries. All of these scenarios touch highly sensitive data, so adopting this scheme impairs the convenience of using it.

The complex logic also brings an obvious performance cost. Every time sensitive data is used, the system must first judge whether it is ordinary data, ordinary sensitive data, or highly sensitive data, and then call the corresponding security interface to process it. At peak access times this overhead may even cause downtime.
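
This per-access cost can be pictured with a hypothetical dispatch function, reusing the Record and Sensitivity types from the sketch above; desensitize and encrypt here are placeholders for whatever security interfaces the real system exposes:

    def desensitize(value: str) -> str:
        # Placeholder masking: hide all but the last two characters.
        return "*" * max(len(value) - 2, 0) + value[-2:]


    def encrypt(value: str) -> str:
        # Placeholder; a real system would call a vetted crypto library.
        return "<ciphertext>"


    def read_field(record: Record, field: str) -> str:
        # Every single read pays for a classification check plus a call
        # into the matching security interface.
        value = getattr(record, field)
        if record.sensitivity is Sensitivity.ORDINARY:
            return value
        if record.sensitivity is Sensitivity.SENSITIVE:
            return desensitize(value)
        return encrypt(value)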

In addition, to ensure security and reduce the risk of leakage, highly sensitive data must be encrypted and desensitized, which further degrades performance and the convenience of data use.

Scheme 3: anonymize the highly sensitive data. Anonymized data does not affect normal use, while the underlying highly sensitive data stays well protected. To ensure the anonymized data can be reversed to the original when necessary, the mapping between the two must be preserved.
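
A minimal sketch of this idea, assuming random tokens and an in-memory mapping table (a real deployment would persist the mapping in hardened storage, which is precisely the weak point discussed below):

    import secrets

    # The token-to-original mapping is the "corresponding relationship"
    # Scheme 3 must preserve: lose it, and the data is unrecoverable.
    _token_map: dict[str, str] = {}


    def anonymize(value: str) -> str:
        # Replace a highly sensitive value with a random token.
        token = "anon_" + secrets.token_hex(8)
        _token_map[token] = value
        return token


    def deanonymize(token: str) -> str:
        # Reversal only works while the mapping survives intact;
        # a deleted or tampered entry raises KeyError here.
        return _token_map[token]


    # Downstream systems only ever see the token.
    token = anonymize("138-0000-0000")
    print(token)               # e.g. anon_9f2c4a1b6d3e8f70
    print(deanonymize(token))  # 138-0000-0000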

Compared with the first two schemes, the third has the smallest impact on the data business, and the amount of application change is modest. It seems the most appropriate, but it has a fatal flaw: to keep the anonymized data reversible, the mapping must be protected, and once that mapping is tampered with or deleted, the data can never be recovered.

This approach also limits the value of the data. Anonymized data is well protected against leakage, but it also hinders the application of the data, and some targeted service functions become hard to deliver. And if the data is reversed on every use, the frequent reverse processing itself increases the probability of a leak.
