  1. Apache Spark™ - Unified Engine for large-scale data analytics

    Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.

  2. Overview - Spark 4.1.0 Documentation

    Scala and Java users can include Spark in their projects using its Maven coordinates and Python users can install Spark from PyPI. If you’d like to build Spark from source, visit Building Spark.
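
    A minimal sketch of the PyPI route mentioned in this result, assuming a local single-node setup; the package name pyspark and the local[*] master setting are standard defaults rather than details taken from the snippet:

    ```python
    # Assumes Spark was installed from PyPI, e.g.:  pip install pyspark
    from pyspark.sql import SparkSession

    # Start a local, single-node session; "local[*]" uses all available CPU cores.
    spark = (
        SparkSession.builder
        .master("local[*]")
        .appName("pypi-install-check")
        .getOrCreate()
    )

    print(spark.version)  # e.g. "4.1.0" if that release is installed
    spark.stop()
    ```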

  3. Quick Start - Spark 4.1.0 Documentation

    Spark’s shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. It is available in either Scala (which runs on the Java VM and is thus a good way to use …
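
    A rough illustration of that interactive workflow, sketched in the Python shell rather than Scala; the README.md path is just a placeholder file:

    ```python
    # Inside the PySpark shell (./bin/pyspark) a SparkSession named `spark` already exists.
    # Read a text file into a DataFrame and explore it interactively.
    text = spark.read.text("README.md")   # placeholder path

    print(text.count())                                                 # number of lines
    text.filter(text.value.contains("Spark")).show(5, truncate=False)   # lines mentioning "Spark"
    ```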

  4. Documentation | Apache Spark

    Setup instructions, programming guides, and other documentation are available for each stable version of Spark below: Spark 4.1.0 …

  5. Examples - Apache Spark

    Spark allows you to perform DataFrame operations with programmatic APIs, write SQL, perform streaming analyses, and do machine learning. Spark saves you from learning multiple frameworks …
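
    To make the streaming part of that list concrete, here is a small sketch using the built-in rate source and a console sink; the rows-per-second setting and trigger interval are arbitrary demo values:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("stream-sketch").getOrCreate()

    # The "rate" source generates (timestamp, value) rows, which is handy for testing.
    stream = (
        spark.readStream
        .format("rate")
        .option("rowsPerSecond", 5)   # arbitrary demo rate
        .load()
    )

    # Print each micro-batch to the console, then stop after a short demo window.
    query = (
        stream.writeStream
        .format("console")
        .trigger(processingTime="5 seconds")
        .start()
    )
    query.awaitTermination(20)  # run for roughly 20 seconds
    query.stop()
    spark.stop()
    ```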

  6. Spark SQL and DataFrames - Spark 4.1.0 Documentation

    Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL provide Spark with more information about the structure of both …
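
    A brief sketch of what that extra structural information looks like in practice: column names and types declared up front, queried through both the DataFrame API and SQL. The sample rows are invented for illustration:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("sql-sketch").getOrCreate()

    # An explicit schema gives Spark SQL the structural information the plain RDD API lacks.
    people = spark.createDataFrame(
        [("alice", 34), ("bob", 45)],      # invented sample rows
        schema="name string, age int",
    )

    # The same question asked two ways: through the DataFrame API ...
    people.filter(people.age > 40).show()

    # ... and through SQL against a temporary view.
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age > 40").show()

    spark.stop()
    ```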

  7. Getting Started — PySpark 4.1.0 documentation - Apache Spark

    There are more guides shared with other languages, such as the Quick Start in the Programming Guides section of the Spark documentation. There are also live notebooks where you can try PySpark without any setup steps.

  8. SQL Reference - Spark 4.1.0 Documentation

    Spark SQL is Apache Spark’s module for working with structured data. This guide is a reference for Structured Query Language (SQL) and includes syntax, semantics, keywords, and examples for …

  9. Spark SQL & DataFrames | Apache Spark

    Spark SQL includes a cost-based optimizer, columnar storage, and code generation to make queries fast. At the same time, it scales to thousands of nodes and multi-hour queries using the Spark …
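
    One way to see the optimizer at work is to ask Spark for a query plan; explain() with a mode argument prints the plan the optimizer produced. A self-contained sketch with a made-up table:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("plan-sketch").getOrCreate()

    df = spark.createDataFrame([("a", 1), ("b", 2)], schema="k string, v int")  # made-up rows
    df.createOrReplaceTempView("t")

    # "formatted" is one of the supported explain modes; it prints the optimized physical plan.
    spark.sql("SELECT k, SUM(v) AS total FROM t GROUP BY k").explain(mode="formatted")

    spark.stop()
    ```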

  10. API Reference — PySpark 4.1.0 documentation - Apache Spark

    Note: Spark SQL, Pandas API on Spark, Structured Streaming, and MLlib (DataFrame-based) support Spark Connect.
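
    A hedged sketch of connecting through Spark Connect from Python; the server address sc://localhost:15002 is only a placeholder, and this assumes a Connect server is already running and that PySpark was installed with the Connect extras:

    ```python
    from pyspark.sql import SparkSession

    # Connect to a running Spark Connect server; the address below is a placeholder.
    spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

    # DataFrame and SQL calls look the same as in a classic session,
    # but they execute on the remote server.
    spark.range(5).show()

    spark.stop()
    ```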