Our client is a provider of managed technology services and a product reseller. The company was established in the 1980s and was later acquired by one of the world's largest investment management companies. It is currently headquartered in California and has over 8,000 employees. Our client specializes in Managed Workplace services, including IT solutions, hardware, integration, and support, and maintains key partnerships with technology companies such as HP, IBM, Cisco, and Apple.
Our client wanted to replicate their ServiceNow data to two other databases – Oracle and Snowflake. The main challenge of the project was migrating huge volumes of data efficiently. Some data tables consisted of over a hundred million records. At that scale, sharing all the information at once is not an option. Instead, we had to divide the data into specific pieces and migrate them separately.
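As a minimal sketch of the idea, a large table can be divided into fixed-size id ranges that are then migrated one piece at a time. The function name and chunk size below are illustrative, not taken from the actual project tooling:

```python
def chunk_ranges(min_id: int, max_id: int, chunk_size: int):
    """Yield (start, end) id ranges covering [min_id, max_id] inclusive,
    so each range can be migrated as one independent piece."""
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        yield start, end
        start = end + 1

# Example: a table of 100 million records split into 10-million-record pieces.
ranges = list(chunk_ranges(1, 100_000_000, 10_000_000))
# ranges[0] == (1, 10_000_000); len(ranges) == 10
```

Each range is self-contained, so a failed piece can be retried without touching the rest of the table.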
For this project, we estimated a four-month time frame for delivery. At the beginning, we established the KPIs of the project, which were mainly focused on dividing the huge data sets into smaller pieces. Fortunately, the client understood what was required and cooperated fully on the proper separation of the data.
The project was executed in stages. The approach was the following: migrate the data from the past six months plus the incremental data, and afterward start sharing the rest of the data in smaller pieces – around ten million records per day for a given table. So, as a first step, we shared the data that had been generated over the past six months. Next, we started the so-called incremental loader – the ongoing synchronization between the different databases. The division of the data was done in cooperation with the client. We relied on the Perspectium DataSync tool for migrating and dividing the data tables.
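The daily cap makes the historical backfill easy to plan: the number of daily batches a table needs is just its record count divided by the cap, rounded up. A small sketch of that arithmetic (the cap value follows the plan above; the function name is illustrative):

```python
import math

DAILY_CAP = 10_000_000  # ~10 million records per table per day, per the plan

def backfill_days(total_records: int, cap: int = DAILY_CAP) -> int:
    """Number of daily batches needed to backfill one table's history."""
    return math.ceil(total_records / cap)

# A table with 120 million historical records needs 12 daily batches.
days = backfill_days(120_000_000)
# days == 12
```

In practice the incremental loader keeps new records flowing while this backfill works through the older history, so the two stages can run side by side.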
In order to ensure the quality of the process, we needed to carefully monitor each step of the project. In a migration this large, there are almost always corrupted records. When such errors occurred, we had to check manually whether everything was okay. One of the most common problems was missing data in a certain row. When that happened, we manually located where the problem had occurred. Once the record was found, it was reshared. For more specific problems, such as an encryption failure, we investigated the issue, added extra logic to clean the data records that had failed to be shared, and reshared them.
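The "missing data in a row" check can be sketched as a simple validation pass that flags records with empty required fields for manual review and resharing. The field names and record shape here are hypothetical, not the client's actual schema:

```python
# Hypothetical required columns; real names depend on the source table.
REQUIRED_FIELDS = ("sys_id", "created_on", "payload")

def find_bad_records(records):
    """Return records with missing or empty required fields,
    so they can be reviewed and queued for resharing."""
    bad = []
    for rec in records:
        if any(not rec.get(field) for field in REQUIRED_FIELDS):
            bad.append(rec)
    return bad

records = [
    {"sys_id": "a1", "created_on": "2020-01-01", "payload": "ok"},
    {"sys_id": "a2", "created_on": "", "payload": "x"},  # empty date field
]
to_reshare = find_bad_records(records)
# to_reshare holds only the record with the empty created_on field
```

A pass like this runs after each migrated piece, so only the flagged records need manual attention rather than the whole batch.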
Overall, we migrated around fifty million records per week. The final result was that the client had all the data sets transferred to the two new databases and could use that data for future business intelligence purposes, such as reporting and analytics, in an easy and affordable manner.