Question-6: As a Senior Solution Architect, you are working with a data warehousing team that performs data analysis, alongside a BigData team that has recruited consultants from HadoopExam.com. The team needs to analyze data provided by external partners, but that data includes personally identifiable information (PII). You must process and store the data without retaining any PII. What should you do?
A. Create a Dataflow pipeline to ingest the data from the external sources. Within the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove all PII, then write the results to BigQuery.
B. Create a Dataflow pipeline to ingest the data from the external sources. As part of the pipeline, store all data that does not contain PII in BigQuery, and store all data that does contain PII in a Cloud Storage bucket that has a retention policy configured.
C. Ask the external partners to upload all data to Cloud Storage. Configure Bucket Lock on the bucket. Create a Dataflow pipeline to read the data from the bucket, use the Cloud Data Loss Prevention (Cloud DLP) API to remove all PII, and write the results to BigQuery.
D. Ask the external partners to import all data into your BigQuery dataset. Create a Dataflow pipeline to copy the data into a new table, skipping all columns that contain PII as part of the Dataflow job.
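The ingest-redact-store flow described in option A can be illustrated with a minimal, self-contained sketch. In a real pipeline, Cloud DLP would inspect records server-side for configured infoTypes (such as EMAIL_ADDRESS or US_SOCIAL_SECURITY_NUMBER) via the `google-cloud-dlp` client; the regexes and the `pipeline` function below are hypothetical stand-ins used only to demonstrate the redact-before-store pattern, not the actual Dataflow or DLP APIs.

```python
import re

# Hypothetical stand-in for Cloud DLP infoType detectors. A real
# deployment would call the Cloud DLP deidentify API instead of
# matching local regexes.
PII_PATTERNS = {
    "EMAIL_ADDRESS": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(record: str) -> str:
    """Replace each detected PII match with its infoType placeholder."""
    for info_type, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{info_type}]", record)
    return record

def pipeline(rows):
    """Mimic the Dataflow stages: read -> redact PII -> load to warehouse.

    Only the redacted rows would ever be written to BigQuery, so no
    raw PII is stored downstream.
    """
    return [redact_pii(r) for r in rows]

rows = [
    "order 17 placed by jane.doe@example.com",
    "applicant SSN 123-45-6789 on file",
]
print(pipeline(rows))
```

The key design point the question tests is *where* redaction happens: in option A it occurs inside the pipeline, before anything is persisted, so no storage location ever holds raw PII.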
