Post Install - LarryGF/terraHom GitHub Wiki
These are the steps you need to follow after you've finished the infrastructure deployment to configure some of the services.
Create a Discord Webhook:
- Go to your Discord server settings.
- Navigate to "Integrations" and then "Webhooks."
- Create a new webhook. Remember the URL it gives you; you will need this for configuration.
You can create as many webhooks as you like, and you can split the notifications between several of them. These webhooks are what you'll use to configure notifications for the different services.
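As a quick sanity check you can post a test message to the webhook before wiring it into any service. This is a hedged sketch: `WEBHOOK_URL` is a placeholder for the URL Discord gave you, and the message text is arbitrary.

```shell
# Build a minimal Discord webhook payload; WEBHOOK_URL is a placeholder
# for the URL you copied from the Discord integrations page.
WEBHOOK_URL="https://discord.com/api/webhooks/<id>/<token>"
PAYLOAD=$(printf '{"content": "%s"}' "terraHom test notification")
echo "$PAYLOAD"
# Send it (uncomment once WEBHOOK_URL is real):
# curl -H "Content-Type: application/json" -d "$PAYLOAD" "$WEBHOOK_URL"
```

If the message shows up in your Discord channel, the webhook is good to use in the service configs below.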
- Go to https://rtorrent.{your domain}
- Create a username and password and keep them handy; you will need them later
- Paste the following path in the socket field:

```
/config/.local/share/rtorrent/rtorrent.sock
```
SABnzbd has a security feature that prevents access from anywhere but localhost unless a specific setting or credentials have been configured. We will go with the latter route for this particular config:
- Forward sabnzbd to your localhost:

```shell
kubectl port-forward -n services svc/sabnzbd 8090:8080
```
- Go to http://localhost:8090/sabnzbd
- Finish the wizard
- In Config->General set the username and password
- You'll be able to access it from anywhere now
It's worth mentioning that usenet indexers and servers are two different things: indexers are added in Prowlarr, while servers are added in SABnzbd. I particularly recommend NZBGeek as an indexer and FrugalUsenet as a server.
- Jackett has been deprecated in favor of Prowlarr; it is much easier to sync the indexers that way
- Go to https://jackett.{your domain}/UI/Dashboard
- Add your desired indexers
- Don't close the page yet, since you'll need it to add indexers to Sonarr/Radarr
Please bear in mind that, after you have configured Radarr, Sonarr, Prowlarr, etc., they will generate an API key that you will use to configure the integration between the services. If you're just interested in making the services work, that's as far as you need to go, but there are additional steps if you want the homepage widgets to work:
- Take note of the API key for each service
- Add it to `terraform.tfvars` under the `api_keys` variable; it should look something like this:
```hcl
api_keys = {
  radarr_key   = "radarr-key"
  sonarr_key   = "sonarr-key"
  prowlarr_key = "prowlarr-key"
  plex_key     = "plex-key"
}
```
- Then you have to run the applications' stack again:

```shell
terraform apply -auto-approve -target module.argocd_application
```
- Go to https://prowlarr.{your domain}
- I don't recommend setting up auth for this; it might lock you out of the app, and it's not worth the hassle
- Add your desired indexers
- Go to the apps (Sonarr/Radarr, SABnzbd) and get their API keys under Settings -> General -> Security
- Go to Settings -> Apps and add your deployed apps:
  - For all apps the `Prowlarr server` will be: http://prowlarr:9696
  - For all apps the `App server` will be the name of the app; get the port from the table below:

| Service | Port |
| --- | --- |
| radarr | 7878 |
| sonarr | 8989 |
| whisparr-radarr | 6969 |
| sabnzbd | 8080 |
- For instance, for `sonarr`, the `App server` would be http://sonarr:8989
After you've finished adding the apps you might want to trigger a
Sync app indexers
- Go to Radarr/Sonarr: https://{radarr/sonarr}.{your domain}
- Go to Settings -> Media Management and set the root folder to /downloads; that's where the `media` shared volume will be mounted
- Go to Settings -> Download Clients and add a new `Flood` downloader:
  - Name: Flood
  - Host: flood-rtorrent-flood
  - Port: 3000
  - Set the username and password from the Flood step
- Go to Settings -> Download Clients and add a new `SABnzbd` downloader:
  - Name: SABnzbd
  - Host: sabnzbd
  - Port: 8080
  - Set the username and password from the SABnzbd step
  - You also need to add the SABnzbd API key from General -> API Key
- Add new profiles to support other languages
- If the categories don't match between Radarr/Sonarr and Prowlarr, it might fail silently
Go to Settings -> Connect: Add a new Plex connection:
- Name: Plex
- Host:
plex
- Port:
32400
- Authenticate with plex.tv
- Go to Settings -> Connect and add a new Discord connection:
  - Name: Discord
  - Webhook URL: the webhook URL you obtained in the Discord step
  - Username: Sonarr/Radarr
  - Avatar:
    - For Sonarr use: https://avatars.githubusercontent.com/u/1082903?s=200&v=4
    - For Radarr use: https://avatars.githubusercontent.com/u/25025331?s=200&v=4
- Add subtitle providers in Settings -> Providers:
  - Enable preferred providers
  - Enter the required details for each, including login information
- Create language profiles under Settings -> Languages:
  - Click `Add Profile`
  - Configure the preferred languages and subtitle settings
  - Assign profiles to series and movies in Sonarr and Radarr
- Go to Settings -> Sonarr/Radarr to connect Bazarr to Sonarr and Radarr:
  - Input your Sonarr/Radarr API key
  - Use `sonarr` or `radarr` as your URL
  - Click `Test` to verify connectivity
  - Save the settings
Set the comics folder and the ComicVine API key, and enable the API key for Prowlarr.
Adult content is provided using Whisparr. This won't be enabled by default; if you want to be able to download adult content, you will have to add the whisparr config manually to `applications.yaml`.
- The default user is `admin` and the password is `admin123`; it should be passed in the `calibre_web_key` key of the `api_keys` variable
- Run this command:
```shell
# Download an empty Calibre metadata database
curl -L https://github.com/janeczku/calibre-web/raw/master/library/metadata.db -o metadata.db
# Copy it into the calibre-web pod
POD=$(kubectl get pods -n services -l app.kubernetes.io/name=calibre-web -o jsonpath="{.items[0].metadata.name}")
kubectl cp metadata.db "$POD":downloads/calibre/metadata.db -n services
# Fix ownership and clean up
kubectl exec -n services "$POD" -- chown 1001:1001 /downloads/calibre/metadata.db
rm metadata.db
```
- Go to the calibre-web UI under https://calibre.{your domain} and log in using the credentials
- Go to Admin -> Edit Basic Configuration -> Feature Configuration and enable uploads; also enable anonymous browsing (this has to be enabled because the /opds endpoint keeps returning 401 otherwise)
- Go to the users page and grant upload/download permissions to the Guest user
- The OPDS URL is https://calibre.{your domain}/opds; you can use this in your Moon Reader
Kavita will automatically create folders under the default `media` PVC, one for each type of library:
- data/kavita/comics
- data/kavita/manga
- data/kavita/books
Once it's up and running you can access Kavita and add your libraries under Settings -> Libraries -> Add library.
- Log in to Plex by visiting: https://plex.{your domain}
- The Emby API key can be found under Settings -> API Key

To configure Ombi:
- You need to select `Use your Jellyfin account`
- Then, for the URL: http://jellyfin:8096
- Use your previously created Jellyfin account
- Add `Radarr` and `Sonarr` in the same way as you did with Prowlarr
- Make sure to `test` first so you have access to the quality profiles
- Set the root folder
- Don't forget to mark them as default
First you need to access Tautulli over the local port to configure it:

```shell
kubectl port-forward -n services svc/tautulli 8181:8181
```

Finish the setup wizard: log in to Plex, set up a username and password, etc.
- First create an account in CrowdSec
- In the CrowdSec console you can get the enrollment key under `Enroll your CrowdSec Security Engine`
- Deploy the Helm chart by enabling it in `applications.yaml`
- Once it's deployed, accept the enrollment in the CrowdSec console
Before installation, put some long, random values under:

```hcl
api_keys = {
  ...
  authelia_JWT_TOKEN              = "unique_long_string"
  authelia_SESSION_ENCRYPTION_KEY = "unique_long_string"
  authelia_STORAGE_ENCRYPTION_KEY = "unique_long_string"
}
```
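The exact format of these values doesn't matter as long as they are long and random. One way to generate ready-to-paste lines, assuming `openssl` is available on your machine, is:

```shell
# Print a tfvars-style line for each Authelia secret,
# using 32 random bytes (64 hex characters) per key.
for key in authelia_JWT_TOKEN authelia_SESSION_ENCRYPTION_KEY authelia_STORAGE_ENCRYPTION_KEY; do
  printf '%s = "%s"\n' "$key" "$(openssl rand -hex 32)"
done
```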
There is a bash script under `scripts/create_authelia_users.sh` that takes a username, password and email and creates the config file that Authelia will use for local user creation. It will output a file named `users.config` in the path the script was run from. For now you have to copy the contents of that file to `terraform/modules/argocd_application/authelia/values.yaml` under:
```yaml
authelia_users:
  users:
    {user_name}:
      disabled: false
      displayname: {user_name}
      password:
      email: {user_email}
      groups:
        - admins
        - dev
```
At the moment Authelia is configured to store notifications inside the pod; to view them you have to check `/config/notification.txt`. In order to validate a user:
- Go to your Authelia URL
- Authenticate using your credentials
- Click on register
- Go to the Authelia pod (either from ArgoCD's terminal feature or using `kubectl exec`)
- There, `cat /config/notification.txt` and follow the link shown
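If you'd rather not eyeball the file, you can pull the link out with `grep`. The sample below uses a made-up notification body, since the exact file format may vary between Authelia versions:

```shell
# Simulate a notification file (contents are hypothetical).
cat > /tmp/notification.txt <<'EOF'
Date: 2024-01-01
Subject: Register your device
Link: https://authelia.example.com/one-time-code?token=abc123
EOF
# Extract the first URL from the file.
grep -o 'https://[^[:space:]]*' /tmp/notification.txt | head -n 1
# Against the real pod this would be something like:
# kubectl exec -n <namespace> <authelia-pod> -- grep -o 'https://[^[:space:]]*' /config/notification.txt
```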
This was based on the brokenscript guide for configuring Traefik and Authentik in Docker.
- Visit https://authentik.${domain}/if/flow/initial-setup/ to configure a username and password
- If you want to be able to log in as the Admin user from Google, make sure the email you set for `akadmin` is your Google email
- Go to Admin Interface -> Applications -> Providers -> Create
  - Select `Proxy Provider`
  - Then fill in the following details:
    - Name: Domain Forward Auth Provider
    - Select: Forward auth (domain level)
    - Authentication URL: https://authentik.${domain} (should be automatically populated)
    - Cookie domain: ${domain}
- Go to Admin Interface -> Applications -> Application -> Create
- Fill in the following details:
  - Name: Domain Forward Auth Application. This is what appears on the Authentik user interface/dashboard
  - Slug: domain-forward-auth-application. An internal application name; the name isn't super important
  - Provider: Domain Forward Auth Provider. This should match the previously created provider
  - Launch URL: empty. Do NOT put anything in the Launch URL; it needs to autodetect since it's the catch-all rule
- Go to Admin Interface -> Applications -> Outposts
- Click on the edit button next to `authentik Embedded Outpost`
  - Under Integrations select `Local Kubernetes Cluster`
  - Click on the `Domain Forward Auth Application (Domain Forward Auth Provider)`
  - Click on `Update`
- This should create a Kubernetes service named `ak-outpost-authentik-embedded-outpost` and a Traefik middleware named `ak-outpost-authentik-embedded-outpost`. You can always search for them by executing:
```
$ kubectl get -n authentik middlewares.traefik.io
NAME                                    AGE
ak-outpost-authentik-embedded-outpost   12m

$ kubectl get -n authentik svc
NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
authentik-redis-headless                ClusterIP   None            <none>        6379/TCP                     35m
authentik-redis-master                  ClusterIP   10.43.195.177   <none>        6379/TCP                     35m
authentik-postgresql-headless           ClusterIP   None            <none>        5432/TCP                     35m
authentik-postgresql                    ClusterIP   10.43.27.225    <none>        5432/TCP                     35m
authentik                               ClusterIP   10.43.223.253   <none>        9300/TCP,80/TCP              35m
ak-outpost-authentik-embedded-outpost   ClusterIP   10.43.246.172   <none>        9000/TCP,9300/TCP,9443/TCP   23m
```
- For enabling Google auth follow this guide
- For ease of use select `Link to a user with identical email address` in `User Matching mode`
- To use the Google login in your setup follow this guide
- If you want to disable user self-service registration you can check here. Basically this means that, since SSO login can only map to an existing user with an identical email address, and we deny new users the right to self-register, an admin needs to manually create a user with an email matching the Google account that you want to allow. To achieve this, under Customisation -> Policies -> `default-source-enrollment-if-sso` edit:

```python
# This policy ensures that this flow can only be used when the user
# is in a SSO Flow (meaning they come from an external IdP)
return False
```

- To avoid showing the username prompt you can follow these steps. Basically, Ctrl+click on the user fields in the `default-authentication-identification` flow to deselect all of them and only select your Google source.
- Go to Directory -> User -> Create
- Set the user information; make sure the email matches the Google account for the new user you want to register
References:
- https://gist.github.com/bluewalk/7b3db071c488c82c604baf76a42eaad3
- https://github.com/sfiorini/NordVPN-Wireguard
Go to Settings > Devices & Services and click the Add Integration button. Use the search bar to look for "HACS" and click on it. Check everything (it's optional) and click Submit. You will see a code; note it down or copy it, then click on the URL displayed. Sign in to your GitHub profile, paste or type the code, and click Continue. Finally, click the Authorize HACS button.
After the argo-workflows app is installed it will require authentication. By default argo-workflows uses the token of a previously existing `ServiceAccount` to authenticate against the Kubernetes API server; a read-only `ServiceAccount` is created by default and its token is stored in a secret. In order to obtain the credentials needed to log in to the UI, run:
```shell
echo "Bearer $(kubectl get secrets -n gitops workflow-admin.service-account-token -o jsonpath="{.data.token}" | base64 -d)"
```
Then paste it in the UI under `Client Authentication` (the middle text field).
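To see why the `base64 -d` step is needed: Kubernetes stores secret data base64-encoded, so the raw jsonpath output has to be decoded before the UI will accept it. With a made-up token value the mechanics look like this:

```shell
# Hypothetical value as kubectl would return it from .data.token:
ENCODED="bXktdG9rZW4="
# Decode it and prepend the scheme the argo-workflows UI expects:
echo "Bearer $(echo "$ENCODED" | base64 -d)"
# prints: Bearer my-token
```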
I chose to go with OneDrive as a backup solution since it's free and easy to set up. Also, if you have 2FA enabled in Mega it won't work, and it's not worth the risk to disable it. I haven't tried other solutions, but this one works fine.
- Follow these steps
- If you click on `Test connection` and it doesn't work, don't stress about it; it's probably a Duplicati thing, and it doesn't necessarily mean it won't work
- Once you've finished your configuration, run it and verify everything is working
- To speed the process up, export that manual config and download it somewhere handy; you will get a `json` file somewhat similar to this:

```
...
"CreatedByVersion": "2.0.6.3",
"Schedule": {
  "ID": 1,
  "Tags": [ "ID=3" ],
  "Repeat": "1D",
  "Rule": "AllowedWeekDays=Monday,Sunday",
  "AllowedDays": [ "mon", "sun" ]
},
"Backup": {
  "ID": "3",
  "Name": "Sonarr",
  "Description": "",
  ...
  "TargetURL": "onedrivev2:///Duplicati/sonarr/(...)",
  "Sources": [ "/config/sonarr/" ],
  ...
  "DisplayNames": { "/config/sonarr/": "sonarr" }
}
```
- Since the only things that should differ are the base path and the destination path, you can quickly edit that same file and keep re-uploading it to Duplicati until you have all your services backed up
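As a concrete sketch of that edit, here is a stripped-down stand-in for the export being retargeted from Sonarr to Radarr with `sed` (the real file has many more fields than this, and the filenames are arbitrary):

```shell
# Minimal stand-in for an exported Duplicati job (real exports are much larger).
cat > /tmp/sonarr-backup.json <<'EOF'
{
  "Backup": {
    "Name": "Sonarr",
    "TargetURL": "onedrivev2:///Duplicati/sonarr/",
    "Sources": [ "/config/sonarr/" ]
  }
}
EOF
# Swap the service name (both cases) to produce a Radarr job,
# then re-import the result through the Duplicati UI.
sed -e 's/Sonarr/Radarr/g' -e 's/sonarr/radarr/g' /tmp/sonarr-backup.json > /tmp/radarr-backup.json
cat /tmp/radarr-backup.json
```

The same one-liner works for any other service whose config lives under `/config`.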
- All of the config folders for your services are mounted under `/config` in Duplicati by default
- Of course, you can always be lazy and back up the entire /config folder, Duplicati config included, but you will lose granularity and control (although you might want to back it up entirely anyway, in case Duplicati itself goes down)