Q112. A company has a large on-premises Apache Hadoop cluster with a 20 PB HDFS database. The cluster is growing every quarter by roughly 200 instances and 1 PB. The company's goals are to enable resiliency for its Hadoop data, limit the impact of losing cluster nodes, and significantly reduce costs. The current cluster runs 24/7 and supports a variety of analysis workloads, including interactive queries and batch processing. Which solution would meet these requirements with the LEAST expense and downtime?

A. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload, based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.
B. Use AWS Snowmobile to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster of similar size and configuration to the current cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances. As the workload grows each quarter, purchase additional Reserved Instances and add them to the cluster.
C. Use AWS Snowball to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workloads, based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.
D. Use AWS Direct Connect to migrate the existing cluster data to Amazon S3. Create a persistent Amazon EMR cluster initially sized to handle the interactive workload, based on historical data from the on-premises cluster. Store the data on EMRFS. Minimize costs by using Reserved Instances for master and core nodes and Spot Instances for task nodes, and auto-scale task nodes based on Amazon CloudWatch metrics. Create job-specific, optimized clusters for batch workloads that are similarly optimized.
Correct answer: A

Why A: AWS Snowmobile is the practical option for a 20 PB migration (a single Snowball device holds only tens of terabytes, so option C would require hundreds of devices, and pushing 20 PB over Direct Connect would take months). With the data on EMRFS (Amazon S3), the cluster no longer has to hold 20 PB of HDFS, so option A can right-size a persistent cluster for the interactive workload, cover the always-on master and core nodes with Reserved Instances, and run task nodes on cheaper Spot capacity. Option B simply replicates the full cluster on Reserved Instances and is far more expensive.
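For reference, option A's cost controls map directly onto the Amazon EMR API. The following is a minimal boto3 sketch, with hypothetical names (cluster name, S3 log bucket, EMR default IAM roles) and illustrative instance types, counts, and thresholds: a persistent cluster whose master and core instance groups run On-Demand (the capacity Reserved Instances would cover) and whose Spot task group auto-scales on the YARNMemoryAvailablePercentage CloudWatch metric. It sketches the pattern, not the exam's reference configuration.

```python
# Minimal sketch (hypothetical names: cluster name, log bucket, default
# EMR roles; instance types and thresholds are illustrative).
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="interactive-analytics",              # hypothetical cluster name
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    LogUri="s3://example-log-bucket/emr/",     # hypothetical bucket
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    AutoScalingRole="EMR_AutoScaling_DefaultRole",
    Instances={
        "KeepJobFlowAliveWhenNoSteps": True,   # persistent, 24/7 cluster
        "InstanceGroups": [
            # Master and core run On-Demand; buying Reserved Instances for
            # these instance types is what actually lowers their cost.
            {"Name": "master", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE", "Market": "ON_DEMAND",
             "InstanceType": "r5.2xlarge", "InstanceCount": 10},
            # Task nodes run on Spot and scale out when YARN memory runs low.
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "r5.2xlarge", "InstanceCount": 2,
             "AutoScalingPolicy": {
                 "Constraints": {"MinCapacity": 2, "MaxCapacity": 50},
                 "Rules": [{
                     "Name": "scale-out-on-low-yarn-memory",
                     "Action": {"SimpleScalingPolicyConfiguration": {
                         "AdjustmentType": "CHANGE_IN_CAPACITY",
                         "ScalingAdjustment": 4,   # add 4 Spot task nodes
                         "CoolDown": 300}},
                     "Trigger": {"CloudWatchAlarmDefinition": {
                         "ComparisonOperator": "LESS_THAN",
                         "EvaluationPeriods": 1,
                         "MetricName": "YARNMemoryAvailablePercentage",
                         "Namespace": "AWS/ElasticMapReduce",
                         "Period": 300,
                         "Threshold": 15.0,
                         "Unit": "PERCENT"}}}]}},
        ],
    },
)
print("ClusterId:", response["JobFlowId"])
```

Custom automatic scaling policies like this attach to instance groups, so only the Spot task group grows and shrinks while the master and core counts stay fixed; because the data lives on EMRFS (Amazon S3) rather than in HDFS, losing Spot task nodes costs compute, not data.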