Question-2: Site reliability engineers sit at the intersection of conventional IT operations and software development. At its most basic, an SRE team is made up of software engineers who design, build, and deploy software to increase the reliability of their systems. Whereas a DevOps team applies an agile methodology to writing code and putting it into production, SRE takes a more thorough approach, working on the system with a wider range of end-user perspectives in mind. Your company's application reliability team has added a debug feature to their backend service that sends all server events to Google Cloud Storage for later analysis. Event records are at least 50 KB and at most 15 MB in size, and the event rate is expected to peak at 3,000 per second. You want to minimize data loss. Which process should you implement?
A. Append metadata to the file body. Compress individual files. Name files with serverName-Timestamp. If the current bucket is older than one hour, create a new bucket and save each file to it; otherwise, save the files to the existing bucket.
B. Batch every 10,000 events with a single manifest file for metadata. Compress the event files and the manifest file into a single archive file. Name files with serverName-EventSequence. If the current bucket is older than one day, create a new bucket and save the archive file to it; otherwise, save the archive file to the existing bucket.
C. Compress individual files. Name files with serverName-EventSequence. Save all files to a single bucket. After saving, set custom metadata headers on each object.
D. Append metadata to the file body. Compress individual files. Name files with a random prefix pattern. Save the files to a single bucket.
Correct Answer: D
Explanation: A longer randomized prefix provides more effective auto-scaling when ramping up to very high read and write rates. For example, a one-character prefix using a random hex value provides effective auto-scaling from the initial 5,000/1,000 reads/writes per second up to roughly 80,000/16,000 reads/writes per second, because the prefix has 16 potential values. If your use case does not require rates higher than this, a one-character randomized prefix is just as effective at ramping up request rates as a prefix that is two characters long or longer. Here's an example:
my-bucket/2fa764-2016-05-10-12-00-00/file1
my-bucket/5ca42c-2016-05-10-12-00-00/file2
my-bucket/6e9b84-2016-
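As a rough illustration of option D, the sketch below shows one way to build object names with a short random hex prefix before uploading, in the spirit of the example names above. It assumes the official google-cloud-storage Python client; the bucket name, prefix length, timestamp format, and helper names are illustrative and not part of the question.

```python
# Sketch: upload compressed event files under randomized-prefix object names (option D).
# Assumes the google-cloud-storage client library; "my-bucket" and all helper
# names below are placeholders, not values taken from the question.
import secrets
from datetime import datetime, timezone

from google.cloud import storage


def randomized_object_name(filename: str, prefix_bytes: int = 3) -> str:
    """Build an object name like '2fa764-2016-05-10-12-00-00/file1'.

    A random hex prefix spreads writes across the bucket's key range so that
    Cloud Storage can auto-scale to higher request rates.
    """
    prefix = secrets.token_hex(prefix_bytes)  # e.g. '2fa764'
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M-%S")
    return f"{prefix}-{stamp}/{filename}"


def upload_event_file(bucket_name: str, local_path: str, filename: str) -> str:
    """Upload one event file to a single bucket under a randomized-prefix name."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    object_name = randomized_object_name(filename)
    blob = bucket.blob(object_name)
    blob.upload_from_filename(local_path)  # one bucket, random prefix per object
    return object_name


if __name__ == "__main__":
    # Hypothetical usage; paths and names are placeholders.
    print(upload_event_file("my-bucket", "/tmp/event-00001.gz", "event-00001.gz"))
```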