
Using.Flume.Flexible.Scalable.and.Reliable.Data.Streaming.pdf Download

  • Updated: 2024-05-23 13:13:34
  • Size: 4.76 MB
  • Rating: ★★★★★
  • Source: Shared by a community member
  • Category: Internet - Industry
  • Format: PDF

Resource Description

How can you get your data from frontend servers to Hadoop in near real time? With this complete reference guide, you’ll learn Flume’s rich set of features for collecting, aggregating, and writing large amounts of streaming data to the Hadoop Distributed File System (HDFS), Apache HBase, SolrCloud, Elasticsearch, and other systems. Using Flume shows operations engineers how to configure, deploy, and monitor a Flume cluster, and teaches developers how to write Flume plugins and custom components for their specific use cases. You’ll learn about Flume’s design and implementation, as well as the features that make it highly scalable, flexible, and reliable. Code examples and exercises are available on GitHub.

With this book, you will:

  • Learn how Flume provides a steady rate of flow by acting as a buffer between data producers and consumers
  • Dive into key Flume components, including sources that accept data and sinks that write and deliver it
  • Write custom plugins to customize the way Flume receives, modifies, formats, and writes data (see the interceptor sketch after the table of contents)
  • Explore APIs for sending data to Flume agents from your own applications (see the client sketch after the table of contents)
  • Plan and deploy Flume in a scalable and flexible way, and monitor your cluster once it’s running

Table of Contents

  • Chapter 1. Apache Hadoop and Apache HBase: An Introduction
  • Chapter 2. Streaming Data Using Apache Flume
  • Chapter 3. Sources
  • Chapter 4. Channels
  • Chapter 5. Sinks
  • Chapter 6. Interceptors, Channel Selectors, Sink Groups, and Sink Processors
  • Chapter 7. Getting Data into Flume
  • Chapter 8. Planning, Deploying, and Monitoring Flume
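One kind of custom plugin mentioned above is an interceptor, the topic of Chapter 6. Below is a hedged Java sketch, not taken from the book, of an interceptor that stamps a header on every event; the class name, package-free layout, header key, and default value are invented for illustration, while the Interceptor and Interceptor.Builder interfaces are Flume's standard plugin surface.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    import org.apache.flume.Context;
    import org.apache.flume.Event;
    import org.apache.flume.interceptor.Interceptor;

    // Hypothetical interceptor that adds a configurable "environment" header
    // to every event passing through a source.
    public class EnvironmentInterceptor implements Interceptor {
      private final String environment;

      private EnvironmentInterceptor(String environment) {
        this.environment = environment;
      }

      @Override
      public void initialize() {
        // No per-instance setup needed for this simple example.
      }

      @Override
      public Event intercept(Event event) {
        // Tag the event; downstream selectors and sinks can key off this header.
        Map<String, String> headers = event.getHeaders();
        headers.put("environment", environment);
        return event;
      }

      @Override
      public List<Event> intercept(List<Event> events) {
        List<Event> out = new ArrayList<>(events.size());
        for (Event event : events) {
          out.add(intercept(event));
        }
        return out;
      }

      @Override
      public void close() {
        // Nothing to release.
      }

      // Builder that Flume instantiates and configures from the agent's properties file.
      public static class Builder implements Interceptor.Builder {
        private String environment = "production";

        @Override
        public void configure(Context context) {
          // Reads the interceptor's "environment" property, falling back to the default.
          environment = context.getString("environment", environment);
        }

        @Override
        public Interceptor build() {
          return new EnvironmentInterceptor(environment);
        }
      }
    }

In an agent configuration, this interceptor would be attached to a source by naming the Builder class as the interceptor type and setting its "environment" property.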
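For the client APIs covered in Chapter 7, here is a minimal Java sketch using the flume-ng-sdk RPC client. It assumes an agent with an Avro source is already listening at the given address; the hostname, port, class name, and message body are placeholder assumptions, not material from the book.

    import java.nio.charset.StandardCharsets;

    import org.apache.flume.Event;
    import org.apache.flume.EventDeliveryException;
    import org.apache.flume.api.RpcClient;
    import org.apache.flume.api.RpcClientFactory;
    import org.apache.flume.event.EventBuilder;

    // Hypothetical example class: pushes one event to a Flume agent over Avro RPC.
    public class FlumeClientExample {
      public static void main(String[] args) {
        // Connect to an Avro source; host and port are placeholder assumptions.
        RpcClient client = RpcClientFactory.getDefaultInstance("flume-agent.example.com", 41414);
        try {
          // Wrap the payload in a Flume event and hand it to the agent.
          Event event = EventBuilder.withBody("hello from the app tier", StandardCharsets.UTF_8);
          client.append(event);
        } catch (EventDeliveryException e) {
          // The agent did not accept the event; a real client would typically
          // close this client, create a new one, and retry.
          System.err.println("Delivery failed: " + e.getMessage());
        } finally {
          client.close();
        }
      }
    }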