
Apache Spark 2.x for Java Developers: Explore big data at scale using Apache Spark 2.x Java APIs

ISBN: 9781787126497
Publisher: Packt Publishing - ebooks Account
Publication Date: 2017-07-26
Number of pages: 350
  • Regular price: $55.75 (on sale)



Key Features

  • Perform big data processing with Spark, without having to learn Scala!
  • Use the Spark Java API to implement efficient enterprise-grade applications for data processing and analytics
  • Go beyond mainstream data processing by adding querying capability, Machine Learning, and graph processing using Spark

Book Description

Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark itself is written in Scala, the Spark Java API exposes all of the features available in the Scala version to Java developers. This book will show you how to implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone.

The book starts with an introduction to the Apache Spark 2.x ecosystem, explains how to install and configure Spark, and refreshes the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore RDDs and their associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, Machine Learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages.
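The Transformation/Action model mentioned above can be previewed with plain Java 8 streams, which the book's Java refresher builds on: transformations are described lazily and a terminal action materializes the result, much like an RDD pipeline. This is only an illustrative sketch (the class and method names here are hypothetical, not from the book):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TransformThenAct {
    // Mirrors an RDD-style pipeline: filter and map are lazy
    // transformations; the terminal collect() is the action that
    // actually triggers the computation and materializes a List.
    static List<Integer> squareEvens(List<Integer> nums) {
        return nums.stream()                      // analogous to parallelizing a collection
                   .filter(n -> n % 2 == 0)       // lazy transformation: keep evens
                   .map(n -> n * n)               // lazy transformation: square each value
                   .collect(Collectors.toList()); // action: materialize the result
    }

    public static void main(String[] args) {
        System.out.println(squareEvens(Arrays.asList(1, 2, 3, 4, 5)));
        // prints [4, 16]
    }
}
```

The same filter/map/collect shape carries over almost verbatim to Spark's `JavaRDD` API, which is one reason the book leans on Java 8 lambda syntax throughout.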

By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications.

What you will learn

  • Process data in different file formats such as XML, JSON, CSV, and plain and delimited text using the Spark core library
  • Perform analytics on data from various data sources, such as Kafka and Flume, using the Spark Streaming library
  • Learn SQL schema creation and the analysis of structured data using various SQL functions, including windowing functions, in the Spark SQL library
  • Explore Spark MLlib APIs while implementing Machine Learning techniques to solve real-world problems
  • Get to know Spark GraphX so you understand various graph-based analytics that can be performed with Spark

About the Author

Sourav Gulati has been associated with the software industry for more than 7 years. He started his career with Unix/Linux and Java, and then moved toward the big data and NoSQL world. He has worked on various big data projects and has recently started a technical blog called Technical Learning. Apart from the IT world, he loves to read about mythology.

Sumit Kumar is a developer with industry insights into telecom and banking. At different junctures he has worked as a Java and SQL developer, but it is shell scripting that he finds both challenging and satisfying. He currently delivers big data projects focused on batch and near-real-time analytics and distributed indexed querying systems. Besides IT, he takes a keen interest in human and ecological issues.

Table of Contents

  1. Introduction to Spark
  2. Java for Spark
  3. Let's Spark
  4. Understanding the Spark Programming Model
  5. Working with Data and Storage
  6. Spark on Cluster
  7. Spark Programming Model - Advanced Concepts
  8. Working with Spark SQL
  9. Near-Real-Time Processing with Spark Streaming
  10. Machine Learning Analytics with Spark MLlib
  11. Learning Spark GraphX
