The kernel recursive least squares (KRLS, also referred to as ALD-KRLS) algorithm, proposed by Engel et al. [1] in 2004, extends the recursive least squares (RLS) approach to nonlinear problems. A key advantage of KRLS is that it performs inner products implicitly in a high-dimensional reproducing kernel Hilbert space via the kernel trick. Consequently, the model can accurately approximate nonlinear systems at moderate computational cost. Several kernel functions exist, but the KRLS approach implements the Gaussian kernel, described as follows:
- Gaussian kernel:
\begin{equation}\label{kernel}
\kappa\left(x^{i},x^{j}\right) = \exp\left(-\frac{\left\|x^{i} - x^{j}\right\|^2}{2\sigma^{2}}\right)
\end{equation}
where $\sigma$ is the kernel width, which controls how nonlinear the model behaves: small values produce highly local, strongly nonlinear fits, while large values make the model behave more nearly linearly.
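The kernel in the equation above can be sketched in a few lines of Python; the function name and defaults here are illustrative, not part of any fixed API:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    """Gaussian kernel kappa(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

# Identical inputs give kappa = 1; distant inputs decay toward 0.
print(gaussian_kernel([0.0, 0.0], [0.0, 0.0]))        # 1.0
print(gaussian_kernel([0.0, 0.0], [3.0, 4.0], 2.0))   # exp(-25/8), close to 0.044
```

Note that a smaller `sigma` makes the kernel decay faster, so fewer samples influence each prediction.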
KRLS maintains a dictionary of stored samples to implement the kernel method. However, keeping every sample in the dictionary would increase the computational cost substantially. For this reason, KRLS applies a sparsification criterion, approximate linear dependency (ALD), to select only the most relevant samples for storage. A Python implementation of KRLS is available at: https://github.com/kaikerochaalves/KRLS.git.
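The dictionary-building idea can be illustrated with a simplified sketch. The ALD test below (a sample is stored only when its feature-space residual $\delta$ exceeds a threshold $\nu$) follows Engel et al. [1], but the weight update here is a plain regularized refit over the dictionary; the full KRLS algorithm instead updates the inverse kernel matrix and the weights recursively and also learns from rejected samples. Class and parameter names are illustrative:

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    diff = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

class SparseKRLS:
    """Simplified sketch of KRLS with ALD sparsification (not the full
    recursive updates of Engel et al., which are more efficient)."""

    def __init__(self, nu=0.01, sigma=1.0, reg=1e-6):
        self.nu = nu          # ALD threshold: larger -> smaller dictionary
        self.sigma = sigma    # Gaussian kernel width
        self.reg = reg        # small ridge term for numerical stability
        self.X, self.y = [], []   # dictionary samples and their targets
        self.alpha = None         # kernel weight vector

    def _kvec(self, x):
        return np.array([gaussian_kernel(c, x, self.sigma) for c in self.X])

    def _kmat(self):
        return np.array([[gaussian_kernel(a, b, self.sigma) for b in self.X]
                         for a in self.X])

    def predict(self, x):
        return 0.0 if not self.X else float(self._kvec(x) @ self.alpha)

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        if self.X:
            # ALD test: distance from phi(x) to the span of the dictionary.
            k = self._kvec(x)
            a = np.linalg.solve(self._kmat() + self.reg * np.eye(len(self.X)), k)
            delta = gaussian_kernel(x, x, self.sigma) - k @ a
            if delta <= self.nu:
                return            # approximately linearly dependent: not stored
        self.X.append(x)
        self.y.append(y)
        # Refit weights over the (small) dictionary; O(m^3) per addition.
        K = self._kmat() + self.reg * np.eye(len(self.X))
        self.alpha = np.linalg.solve(K, np.array(self.y))
```

Fed a stream of samples, such a model keeps only a small fraction of them in the dictionary while still approximating the underlying function, which is the point of sparsification.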
Some variations of KRLS were also proposed in the literature:
- SW-KRLS (sliding-window kernel recursive least squares)
- EX-KRLS (extended kernel recursive least squares)
- FB-KRLS (fixed-budget kernel recursive least squares)
- KRLS-T (kernel recursive least squares tracker)
- QKRLS (quantized kernel recursive least squares)
- ADA-KRLS (adaptive dynamic adjustment kernel recursive least squares)
- ANS-QKRLS (adaptive normalized sparse quantized kernel recursive least squares)
- QALD-KRLS (combination of ALD-KRLS and QKRLS algorithms)
Among the advantages of KRLS, the following can be highlighted:
- KRLS can produce accurate results when dealing with complex and non-linear data.
- KRLS has a moderate computational complexity.
Among the disadvantages of KRLS, the following can be highlighted:
- KRLS is not well suited to non-stationary environments, since the standard algorithm never discards outdated samples from the dictionary.
- KRLS provides no built-in technique to optimize the parameter $\sigma$ of the Gaussian kernel; it must be tuned externally, e.g., by cross-validation.
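Since $\sigma$ is not optimized by the algorithm itself, a common workaround is a simple grid search on held-out data. The sketch below is purely illustrative: `fit_predict` is a hypothetical batch kernel ridge stand-in for any KRLS-style learner parameterized by `sigma`, and the candidate grid is an arbitrary choice:

```python
import numpy as np

def fit_predict(x_train, y_train, x_test, sigma, reg=1e-6):
    """Batch Gaussian-kernel ridge fit/predict (stand-in for a KRLS learner)."""
    sq = (x_train[:, None] - x_train[None, :]) ** 2
    K = np.exp(-sq / (2.0 * sigma ** 2))
    alpha = np.linalg.solve(K + reg * np.eye(len(x_train)), y_train)
    k = np.exp(-((x_test[:, None] - x_train[None, :]) ** 2) / (2.0 * sigma ** 2))
    return k @ alpha

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 80))
y = np.sin(x) + 0.05 * rng.standard_normal(80)
# Alternate samples into training and validation splits.
x_tr, y_tr, x_va, y_va = x[::2], y[::2], x[1::2], y[1::2]

# Pick the kernel width with the lowest validation mean squared error.
best = min((np.mean((fit_predict(x_tr, y_tr, x_va, s) - y_va) ** 2), s)
           for s in [0.05, 0.2, 0.5, 1.0, 2.0])
print("best sigma:", best[1], "validation MSE:", best[0])
```

In practice the grid and validation scheme should match the application; more principled alternatives such as marginal-likelihood maximization also exist.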
REFERENCES
[1] Y. Engel, S. Mannor, R. Meir, The kernel recursive least-squares algorithm,
IEEE Transactions on Signal Processing 52 (8) (2004) 2275–2285.
https://doi.org/10.1109/tsp.2004.830985