Can a UDF in Spark SQL use query statements to pass parameters?

I used Spark SQL to write a UDF that calculates commission based on performance, but now the requirement has changed: the commission percentage is determined by position and department. Here's the current version:

// Commission rate is hard-coded by performance (yj) tier;
// anything above 20,000 falls through to -1.0.
spark.udf.register("mmyjtc", (yj: Float) => {
  yj match {
    case v if v <= 6000               => v * 0.10
    case v if v > 6000 && v <= 10000  => v * 0.13
    case v if v > 10000 && v <= 20000 => v * 0.20
    case _                            => -1.0
  }
})

These are the tier boundaries of the previous UDF: (0, 6000], (6000, 10000], and (10000, 20000]. Now both the boundaries of these intervals and the commission rates of 0.10, 0.13, and 0.20 need to be obtained through a SQL query. Can a UDF still be used? Or is there an alternative?
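For concreteness, suppose those tiers are stored in a rule table keyed by department and position, one row per interval. The table and column names below (commission_rule, dept_id, position_id, lower_bound, upper_bound, rate) are only placeholders for illustration:

// Hypothetical rule table: one row per commission tier for a given
// department/position combination.
spark.sql(
  """SELECT lower_bound, upper_bound, rate
    |FROM commission_rule
    |WHERE dept_id = 1 AND position_id = 2
    |ORDER BY lower_bound""".stripMargin).show()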

Mar.21,2021

It's very simple: just add parameters to the UDF.
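Here is a minimal sketch of that idea, assuming the hypothetical commission_rule table from the question above (and that its boundary and rate columns are DOUBLE): query the boundaries and rates into a driver-side collection first, then register a UDF that closes over them instead of hard-coding the tiers.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("commission").getOrCreate()

// Pull the tier boundaries and rates for one department/position to the driver.
// Each element is (lower_bound, upper_bound, rate); names are assumptions.
val rules: Array[(Double, Double, Double)] = spark
  .sql("""SELECT lower_bound, upper_bound, rate
         |FROM commission_rule
         |WHERE dept_id = 1 AND position_id = 2""".stripMargin)
  .collect()
  .map(r => (r.getDouble(0), r.getDouble(1), r.getDouble(2)))

// The UDF captures the queried rules, so nothing is hard-coded;
// a value outside every interval still returns -1.0 like before.
spark.udf.register("mmyjtc", (yj: Float) => {
  rules
    .find { case (lo, hi, _) => yj > lo && yj <= hi }
    .map { case (_, _, rate) => yj * rate }
    .getOrElse(-1.0)
})

Usage stays the same, e.g. spark.sql("SELECT name, mmyjtc(performance) FROM employee") (column names again are assumptions). If the rules vary per row by department and position, another option is to skip the UDF entirely and join the performance data to commission_rule on the dept/position keys plus a range condition (performance > lower_bound AND performance <= upper_bound).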