RClone as CSI on K3S

Introduction

RClone is a funny tool for jankily using cloud storage as a storage volume in mysterious and wacky ways. Ever wanted to use Google Drive as a PVC and hate yourself for every moment of it? Here is the solution for you.

Prerequisites

  • A Kubernetes cluster running K3S, though this should work on any distribution.
  • Kubectl installed and configured to talk to your cluster.
  • A cloud storage provider; I will be using Google Drive for this.

RClone Config Generation

First, you're going to want to generate your rclone config; this is the file that defines the access keys and storage configuration. You can download rclone from here.
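
If you don't already have rclone installed, the install script from the rclone docs is the quickest route on Linux (other platforms have downloads on the rclone site):

curl https://rclone.org/install.sh | sudo bash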

After that, you will want to run the following command to generate your config:

rclone config

This will walk you through setting up and configuring your storage provider.

Additionally, you may want to add a crypt remote, which encrypts your data before it is uploaded so your storage provider cannot see the contents of your PVCs. You can read more about it here.
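
As a rough sketch (the remote names and bracketed values are placeholders; rclone config generates the real entries for you), a config with a Google Drive remote wrapped in a crypt remote might look something like this:

[gdrive]
type = drive
client_id = <your-client-id>
client_secret = <your-client-secret>
token = <generated-by-rclone-config>

[gdrive-crypt]
type = crypt
remote = gdrive:encrypted
password = <obscured-password>
password2 = <obscured-salt>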

K8S Provider Deployment

You're going to need an additional storage provider set up alongside the rclone CSI; for this I have chosen to use Longhorn, however any provider should work. You can install Longhorn here.

(TL;DR - kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/varchives/deploy/longhorn.yaml)
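
Once applied, check that Longhorn's pods come up in the longhorn-system namespace before moving on:

kubectl -n longhorn-system get pods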

RClone CSI Deployment

First, you're going to want to clone the CSI repository into a folder:

git clone https://github.com/wunderio/csi-rclone

Now, you can deploy these files with the following command:

kubectl apply -f deploy/kubernetes/1.19/
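
To sanity-check the deployment, you can confirm the CSI driver registered and that its pods are running in kube-system (exact pod names may vary between versions of the repo):

kubectl get csidrivers
kubectl -n kube-system get pods | grep rclone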

Next, you’ll want to deploy your RClone CSI default config for the storage class.

Use the following YAML, and paste your rclone config from the previous step into the configData field. That config can be found under ~/.config/rclone/rclone.conf. You'll also want to set the remote field to the name of your remote; if you have multiple and are using crypt/cache, put the name of the crypt remote in that field. Finally, edit the remotePath field to the folder you want to use as the base folder for your PVCs.

apiVersion: v1
kind: Secret
metadata:
  name: rclone-secret
type: Opaque
stringData:
  remote: "Your-Remote"
  remotePath: "/folder_here"
  configData: |
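
For reference, a filled-in secret (reusing the hypothetical gdrive-crypt remote from the config sketch above) might look something like this:

apiVersion: v1
kind: Secret
metadata:
  name: rclone-secret
type: Opaque
stringData:
  remote: "gdrive-crypt"
  remotePath: "/k8s-volumes"
  configData: |
    [gdrive]
    type = drive
    client_id = <your-client-id>
    client_secret = <your-client-secret>
    token = <generated-by-rclone-config>

    [gdrive-crypt]
    type = crypt
    remote = gdrive:encrypted
    password = <obscured-password>
    password2 = <obscured-salt>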

Then save and deploy that file via:

kubectl apply -f rclone-secret.yaml -n kube-system

To configure PVCs to use this, you must either set it as the default storage class or add storageClassName: rclone to the PVC.

An example PVC is shown below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: rclone
  selector:
    matchLabels:
      name: nextcloud-data
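
Save that to a file (the name is up to you), apply it, and check that the claim binds once the matching PV from the next section exists:

kubectl apply -f nextcloud-data-pvc.yaml
kubectl get pvc nextcloud-data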

PV Provisioning

One downside to this CSI is that it does not support dynamic PV provisioning, meaning you need to set up a PV for each claim yourself.

An example is as follows:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-data
  labels:
    name: nextcloud-data
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  storageClassName: rclone
  csi:
    driver: csi-rclone
    volumeHandle: data-id # Needs to be different for every volume
    volumeAttributes:
      remotePath: "k8s/nextcloud" # Path in your RClone Mount

Conclusion

Unlimited GSuite storage goes brrrrrr. I wouldn't recommend using this for anything important or accessed frequently (however the caching option might make this better), but it's funni, and lets me store infinite backups, logs, and other funni large files in PVCs. The write performance can be pretty low, especially if you're writing lots of smaller files. Read performance tends to be much better, but is still not great with smaller files.

This post is licensed under CC BY 4.0 by the author.