How to add an auto-incrementing sequence-number column to a Spark SQL DataFrame in Java?

I want to page through query results with Spark, but plain Spark SQL does not support paging statements. It is said that you can add a sequence-number column to implement paging, but nearly all the examples are in Scala:
// Add an "id" column to the original schema
val schema: StructType = dataframe.schema.add(StructField("id", LongType))

// Pair each Row with its index via zipWithIndex
val dfRDD: RDD[(Row, Long)] = dataframe.rdd.zipWithIndex()

// Merge the index into each Row
val rowRDD: RDD[Row] = dfRDD.map(tp => Row.merge(tp._1, Row(tp._2)))

// Convert the RDD back to a DataFrame
val df2 = spark.createDataFrame(rowRDD, schema)

df2.show()

How can this be implemented in Java?
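For reference, a line-by-line Java translation of the Scala snippet above might look like the sketch below. It assumes an existing `SparkSession` and a `Dataset<Row>`; since `Row.merge` is awkward to call from Java, the index is appended by copying the row's fields into a new array (an equivalent substitute, not the only way):

```java
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class AddIdColumn {

    /** Appends a 0-based "id" column to the given DataFrame. */
    public static Dataset<Row> withId(SparkSession spark, Dataset<Row> dataframe) {
        // Add an "id" column to the original schema
        StructType schema = dataframe.schema().add("id", DataTypes.LongType);

        // Pair each Row with its index (indices start at 0)
        JavaPairRDD<Row, Long> zipped = dataframe.javaRDD().zipWithIndex();

        // Append the index as the last field of each Row
        JavaRDD<Row> rowRDD = zipped.map(tp -> {
            Row row = tp._1();
            Object[] fields = new Object[row.length() + 1];
            for (int i = 0; i < row.length(); i++) {
                fields[i] = row.get(i);
            }
            fields[row.length()] = tp._2();
            return RowFactory.create(fields);
        });

        // Convert the RDD back to a Dataset<Row>
        return spark.createDataFrame(rowRDD, schema);
    }
}
```

The class and method names are placeholders; the calls (`javaRDD()`, `zipWithIndex()`, `createDataFrame(rowRDD, schema)`) mirror the Scala version one-to-one.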

Jul.01,2022

I add the id directly with dataframe.withColumn("id", row_number().over(Window.orderBy("xxx"))).
There is no need to convert from a DataFrame to an RDD and back to a DataFrame.
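In Java this one-liner looks almost identical; a minimal sketch, where the order column name is a placeholder for whichever column defines the row order:

```java
import static org.apache.spark.sql.functions.row_number;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.Window;

public class AddIdWithWindow {

    /** Appends a 1-based "id" column ordered by the given column. */
    public static Dataset<Row> withId(Dataset<Row> dataframe, String orderColumn) {
        // row_number() numbers rows 1, 2, 3, ... within the window.
        // Note: a Window with no partitionBy moves all rows into a
        // single partition, which can be slow on large data.
        return dataframe.withColumn("id",
                row_number().over(Window.orderBy(orderColumn)));
    }
}
```

Unlike zipWithIndex, row_number() starts at 1 rather than 0, so adjust any paging arithmetic accordingly.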
