Question-43: An operations engineer at QuickTechie Inc. wants to develop a low-cost process for remotely archiving copies of the company's database backup files. The data centre they currently use stores the database files as compressed tar files. Flash drives (often called thumb drives or USB drives/sticks) and solid-state drives (SSDs) use flash memory to write and retrieve data very quickly, which makes them effective physical media for creating backups in a short amount of time. What course of action should the engineer take?
A. Create a cron script that uses gsutil to copy the files to a Coldline Storage bucket.
B. Create a cron script that uses gsutil to copy the files to a Regional Storage bucket.
C. Create a Cloud Storage Transfer Service job to copy the files into a Coldline Storage bucket.
D. Create a Cloud Storage Transfer Service job to transfer the files to a Regional Storage bucket.
Correct Answer: C

Explanation: Google's documentation explicitly notes that custom scripts tend to be unreliable, slow, insecure, and difficult to maintain and troubleshoot. The Storage Transfer Service, for its part, has many valuable features but comes with dependencies of its own: a minimum 300-Mbps internet connection and a Docker engine on-premises, because the transfer agent runs inside a container. Based on those recommendations, some argue that the gsutil command in a cron job running at regular intervals is easier to implement than setting up the Storage Transfer Service, which presupposes that the data centre has the minimum 300-Mbps connection and a machine on which Docker can be installed.

The deciding factor is data volume. The case study involves potentially hundreds of terabytes: at least 65 TB, since that is how much of their NAS storage is used for backups and logs, and the latest version of the case study on the Google Cloud website lists 1 PB of database storage with 600 TB in use. With 2 million TerramEarth vehicles each generating 200 to 500 megabytes of data per day, it is safe to assume the backup set well exceeds 1 TB. Because the goal is a low-cost archive, Coldline Storage is also the appropriate storage class, which rules out the Regional Storage options.

Google's rules of thumb for choosing between gsutil and Storage Transfer Service (https://cloud.google.com/storage-transfer/docs/overview):
- Transferring from another cloud storage provider (for example AWS or Azure): use Storage Transfer Service.
- Transferring between two Cloud Storage buckets: use Storage Transfer Service.
- Transferring less than 1 TB from on-premises: use gsutil.
- Transferring more than 1 TB from on-premises, with enough bandwidth: use Transfer service for on-premises data.
- Transferring more than 1 TB from on-premises, without enough bandwidth to meet the project deadline: use Transfer Appliance (recommended for data that exceeds 20 TB or would take more than a week to upload).
- Transferring less than 1 TB from another Cloud Storage region: use gsutil.
- Transferring more than 1 TB from another Cloud Storage region: use Storage Transfer Service.

Given the data volume and the remote-archiving goal, creating a Cloud Storage Transfer Service job that copies the files into a Coldline Storage bucket is the best fit.
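To make the two approaches concrete, here are minimal sketches of each. All bucket names, paths, agent-pool names, and schedules below are hypothetical placeholders, and the exact flags should be verified against the current Google Cloud documentation.

Storage Transfer Service approach (the chosen answer):

```sh
# Sketch: Transfer service for on-premises data, assuming a Docker-capable
# machine in the data centre and a Coldline bucket created beforehand, e.g.:
#   gsutil mb -c coldline -l us-east1 gs://quicktechie-db-archive

# Create a transfer job that moves the tar backups from an on-premises
# POSIX directory into the Coldline bucket (agent pool name is a placeholder).
gcloud transfer jobs create \
  posix:///var/backups/db \
  gs://quicktechie-db-archive \
  --source-agent-pool=on-prem-backup-pool
```

Cron-plus-gsutil alternative discussed in the explanation:

```sh
#!/bin/bash
# archive-backups.sh -- sketch of the cron-plus-gsutil approach.
# Paths and bucket name are illustrative.
set -euo pipefail

BACKUP_DIR="/var/backups/db"              # directory holding the compressed tar backups
BUCKET="gs://quicktechie-db-archive"      # bucket created with the Coldline storage class

# rsync only uploads new or changed files, so repeated runs stay cheap;
# -m parallelises the copies when there are many files.
gsutil -m rsync -r "$BACKUP_DIR" "$BUCKET"
```

A crontab entry such as "0 2 * * * /usr/local/bin/archive-backups.sh" would run the script nightly. As the explanation notes, this is simpler to set up, but Google's rules of thumb recommend the Transfer service once the on-premises data set exceeds 1 TB, which is clearly the case here.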