pyspark.sql.DataFrame.coalesce

DataFrame.coalesce(numPartitions)

Returns a new DataFrame that has at most numPartitions partitions.

Similar to coalesce defined on an RDD, this operation results in a narrow dependency. For example, if you go from 1000 partitions to 100 partitions, there will not be a shuffle; instead, each of the 100 new partitions will claim 10 of the current partitions. If a larger number of partitions is requested, the DataFrame will stay at the current number of partitions.
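
As a rough illustration (a minimal sketch, assuming an active SparkSession bound to spark as in the Examples below), the partition count can shrink but not grow:

>>> df = spark.range(0, 1000, 1, 1000)        # DataFrame with 1000 partitions
>>> df.coalesce(100).rdd.getNumPartitions()   # narrow dependency, no shuffle
100
>>> df.coalesce(2000).rdd.getNumPartitions()  # cannot grow beyond the current count
1000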

However, if you’re doing a drastic coalesce, e.g. to numPartitions = 1, this may result in your computation taking place on fewer nodes than you would like (e.g. one node in the case of numPartitions = 1). To avoid this, you can call repartition() instead. This adds a shuffle step, but it means the current upstream partitions will be executed in parallel (per whatever the current partitioning is).
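
For comparison, a minimal sketch (again assuming the spark session from the Examples) contrasting a drastic coalesce with repartition(), which shuffles and can also increase the partition count:

>>> df = spark.range(0, 10, 1, 3)
>>> df.coalesce(1).rdd.getNumPartitions()     # narrow dependency; may run on a single node
1
>>> df.repartition(5).rdd.getNumPartitions()  # full shuffle; can increase partitions
5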

New in version 1.4.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
numPartitions : int

specify the target number of partitions

Returns
DataFrame

Examples

>>> from pyspark.sql import functions as sf
>>> spark.range(0, 10, 1, 3).select(
...     sf.spark_partition_id().alias("partition")
... ).distinct().sort("partition").show()
+---------+
|partition|
+---------+
|        0|
|        1|
|        2|
+---------+
>>> from pyspark.sql import functions as sf
>>> spark.range(0, 10, 1, 3).coalesce(1).select(
...     sf.spark_partition_id().alias("partition")
... ).distinct().sort("partition").show()
+---------+
|partition|
+---------+
|        0|
+---------+