pyspark.pandas.DataFrame.std

DataFrame.std(axis=None, skipna=True, ddof=1, numeric_only=None)

Return sample standard deviation.

New in version 3.3.0.

Parameters
axis: {index (0), columns (1)}

Axis along which the function is applied.

skipna: bool, default True

Exclude NA/null values when computing the result. A sketch of skipna=False appears at the end of the Examples section.

Changed in version 3.4.0: Added support for including NA/null values (skipna=False).

ddof: int, default 1

Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. See the worked sketch after this parameter list.

Changed in version 3.4.0: Added support for arbitrary integer values of ddof.

numeric_only: bool, default None

Include only float, int, boolean columns. False is not supported. This parameter is mainly for pandas compatibility.
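
As a minimal sketch of the N - ddof divisor (computed with NumPy directly, not through this API): for the non-null values [1, 2, 3] of column 'a' used in the Examples below, the default ddof=1 gives

>>> import numpy as np
>>> vals = np.array([1.0, 2.0, 3.0])  # non-null values of column 'a'
>>> n = len(vals)
>>> float(np.sqrt(((vals - vals.mean()) ** 2).sum() / (n - 1)))
1.0

which matches df.std() for that column in the Examples below.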

Returns
std: scalar when applied to a Series, and a Series when applied to a DataFrame.

Examples

>>> import numpy as np
>>> import pyspark.pandas as ps
>>> df = ps.DataFrame({'a': [1, 2, 3, np.nan], 'b': [0.1, 0.2, 0.3, np.nan]},
...                   columns=['a', 'b'])

On a DataFrame:

>>> df.std()
a    1.0
b    0.1
dtype: float64
>>> df.std(ddof=2)
a    1.414214
b    0.141421
dtype: float64
>>> df.std(axis=1)
0    0.636396
1    1.272792
2    1.909188
3         NaN
dtype: float64
>>> df.std(ddof=0)
a    0.816497
b    0.081650
dtype: float64

On a Series:

>>> df['a'].std()
1.0
>>> df['a'].std(ddof=0)
0.816496580927726
>>> df['a'].std(ddof=-1)
0.707106...
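
A sketch of skipna=False, assuming it propagates nulls the way pandas does (any NA in a column yields NaN for that column); the expected output is illustrative, not verified here:

>>> df.std(skipna=False)  # doctest: +SKIP
a   NaN
b   NaN
dtype: float64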