pyspark.sql.functions.regr_sxx

pyspark.sql.functions.regr_sxx(y, x)
Aggregate function: returns REGR_COUNT(y, x) * VAR_POP(x) for non-null pairs in a group, where y is the dependent variable and x is the independent variable.

New in version 3.5.0.

Parameters
    y : Column or str
        The dependent variable.
    x : Column or str
        The independent variable.

Returns
    Column
        REGR_COUNT(y, x) * VAR_POP(x) for non-null pairs in a group.
 
Examples

Example 1: All pairs are non-null

>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, 1), (2, 2), (3, 3), (4, 4) AS tab(y, x)")
>>> df.select(sf.regr_sxx("y", "x")).show()
+--------------+
|regr_sxx(y, x)|
+--------------+
|           5.0|
+--------------+

Example 2: All pairs' x values are null

>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, null) AS tab(y, x)")
>>> df.select(sf.regr_sxx("y", "x")).show()
+--------------+
|regr_sxx(y, x)|
+--------------+
|          NULL|
+--------------+

Example 3: All pairs' y values are null

>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (null, 1) AS tab(y, x)")
>>> df.select(sf.regr_sxx("y", "x")).show()
+--------------+
|regr_sxx(y, x)|
+--------------+
|          NULL|
+--------------+

Example 4: Some pairs' x values are null

>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, 1), (2, null), (3, 3), (4, 4) AS tab(y, x)")
>>> df.select(sf.regr_sxx("y", "x")).show()
+-----------------+
|   regr_sxx(y, x)|
+-----------------+
|4.666666666666...|
+-----------------+

Example 5: Some pairs' x or y values are null

>>> import pyspark.sql.functions as sf
>>> df = spark.sql("SELECT * FROM VALUES (1, 1), (2, null), (null, 3), (4, 4) AS tab(y, x)")
>>> df.select(sf.regr_sxx("y", "x")).show()
+--------------+
|regr_sxx(y, x)|
+--------------+
|           4.5|
+--------------+
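As a cross-check of the formula above, here is a minimal pure-Python sketch (no Spark session needed) of what REGR_COUNT(y, x) * VAR_POP(x) computes over (y, x) pairs; the function name and structure are illustrative, not part of the PySpark API:

```python
def regr_sxx_py(pairs):
    """Compute REGR_COUNT(y, x) * VAR_POP(x) over (y, x) pairs,
    skipping any pair where either value is None (mirrors the
    null handling shown in the examples above)."""
    xs = [x for y, x in pairs if y is not None and x is not None]
    if not xs:
        return None  # no non-null pairs -> NULL, as in Examples 2 and 3
    n = len(xs)
    mean = sum(xs) / n
    var_pop = sum((v - mean) ** 2 for v in xs) / n  # population variance
    # n * var_pop is also the sum of squared deviations of x
    return n * var_pop

# Matches Example 1: 4 non-null pairs, VAR_POP(x) = 1.25 -> 5.0
print(regr_sxx_py([(1, 1), (2, 2), (3, 3), (4, 4)]))
```

Note that multiplying the population variance by the pair count yields the sum of squared deviations of x, which is why the result only depends on y through the null filtering.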