FAQ
- System
- Domain Name
- Cloudflare
- Cloud Storage
- Cloudbox Install
  - Skip Tags
  - "A Backup task is currently in progress. Please wait for it to complete."
  - If you cloned the script as USER1 but installed as USER2
  - Error while fetching server API version
  - 403 Client Error: Forbidden: endpoint with name <container name> already exists in network <network name>
  - 500 Server Error: Internal Server Error: driver failed programming external connectivity on endpoint <container name> bind for 0.0.0.0:<port number> failed: port is already allocated
  - Updating Cloudbox
- Docker
- Nginx Proxy
- Rclone
- Plexdrive
- Plex
- Plex Autoscan
  - If during the first time setup, you switched the order of Plex libraries (i.e. TV first, then Movies)
  - Newly downloaded media from Sonarr and Radarr are not being added to Plex?
  - Plex Autoscan log shows error during empty trash request
  - Plex Autoscan error with metadata item id
  - Purpose of a Control File in Plex Autoscan
  - Plex Autoscan Localhost Setup
- Cloudplow
- Sonarr / Radarr
- ruTorrent
- Nextcloud
- Misc
ARM is not supported.
- Choose an X86 server (vs ARM).
- Select "Ubuntu Xenial" as the distribution.
- Click the server on the list.
- Under "ADVANCED OPTIONS", click "SHOW".
- Set "ENABLE LOCAL BOOT" to `off`.
- Click the "BOOTSCRIPT" link and select one above 4.10.
- Start the server.
- You can now skip Kernel.

Reference: https://www.scaleway.com/docs/bootscript-and-how-to-use-it/
If you are having issues upgrading the kernel on OVH, where the kernel upgrade is not taking effect, run `uname -r` to see if you have `grs` in the kernel version string. If so, see https://pterodactyl-daemon.readme.io/v0.4/docs/updating-ovh-kernel on how to update the kernel.
If your server has < 16GB RAM, it's possible Plexdrive is maxing it out (you can check this via `htop`). Try lowering the max chunks used by Plexdrive:

- `sudo nano /etc/systemd/system/plexdrive.service`
- Modify the `--max-chunks=250` to something lower (e.g. `--max-chunks=100`).
- Ctrl + X, Y, Enter to save.
- `sudo systemctl daemon-reload`
- `sudo systemctl restart plexdrive.service`
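For reference, the flag lives on the `ExecStart` line of the unit file. A rough sketch of what that line might look like (the binary path, mount point, and other flags here are placeholders and will differ from your actual file):

```
[Service]
# illustrative only - paths and other flags are placeholders;
# the point is the lowered --max-chunks value
ExecStart=/usr/local/bin/plexdrive mount --max-chunks=100 /mnt/plexdrive
```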
Use the following commands to find out your account's user name and group info:
id
or
id `whoami`
You'll see a line like the following:
uid=XXXX(yourusername) gid=XXXX(yourgroup) groups=XXXX(yourgroup)
How to check current shell:
echo $0
-sh
or
echo ${SHELL}
/bin/sh
Run this command to set bash as your shell (where `<user>` is replaced with your username):
sudo chsh -s /bin/bash <user>
sudo reboot
- Stop all docker containers: `docker stop $(docker ps -a -q)`
- Change ownership of `/opt`. Replace `user` and `group` to match yours (see here): `sudo chown -R user:group /opt`
- Change permission inheritance of `/opt` (the capital `X` grants execute/search on directories, and on files only if they are already executable for someone): `sudo chmod -R ugo+X /opt`
- Start all docker containers: `docker start $(docker ps -a -q)`
- Stop all docker containers: `docker stop $(docker ps -a -q)`
- Stop UnionFS and Plexdrive:
  `sudo systemctl stop unionfs.service`
  `sudo systemctl stop plexdrive.service`
- Change ownership of `/mnt`. Replace `user` and `group` to match yours (see here): `sudo chown -R user:group /mnt`
- Change permission inheritance of `/mnt`: `sudo chmod -R ugo+X /mnt`
- Start Plexdrive and UnionFS:
  `sudo systemctl start plexdrive.service`
  `sudo systemctl start unionfs.service`
- Start all docker containers: `docker start $(docker ps -a -q)`
If you get this error during CB Install:
fatal: [localhost]: FAILED! => {"changed": false, "msg": "API request not authenticated; Status: 403; Method: GET: Call: /zones?name=; Error details: code: 9103, error: Unknown X-Auth-Key or X-Auth-Email; "}
Make sure:
- The `email` in settings.yml matches the one you have listed for your Cloudflare.com account.
- The `cloudflare_api_key` in settings.yml matches your domain's Cloudflare Global API Key.
In short, no.
While there are pros and cons to using either encrypted or unencrypted data on cloud services, the Cloudbox team has decided not to support encrypted cloud data.
Note: You may be able to modify Cloudbox to use encrypted data stored on the cloud (see here), but that will be on you to set up yourself.
Not out of the box. Plexdrive only supports Google Drive, but you can use an Rclone mount instead of Plexdrive for other cloud storage providers.
Say you want to skip installing something; you can do so with `--skip-tags TAG`.
For example, if you wanted to skip installing the MOTD, you could use:
sudo ansible-playbook cloudbox.yml --tags cloudbox --skip-tags motd
But be careful about what you skip, as some things are needed by Cloudbox to install/function properly.
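If you're not sure which tags exist, `ansible-playbook` can list every tag defined in the playbook:

```bash
# list all tags available in the Cloudbox playbook (run from ~/cloudbox)
sudo ansible-playbook cloudbox.yml --list-tags
```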
If you try to run any CB task and get this message:
TASK [sanity_check : Fail when backup.lock exists] *******************************************************************************************************************************
Sunday 03 June 2018 21:24:05 +0200 (0:00:00.337) 0:00:01.917 ***********
fatal: [localhost]: FAILED! => {"changed": false, "msg": "A Backup task is currently in progress. Please wait for it to complete."}
to retry, use: --limit @/home/seed/cloudbox/cloudbox.retry
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=1
This means that:
- A backup is taking place, so you will need to wait for it to complete before running any other task; or
- A backup didn't complete properly (e.g. a forced quit), and so the backup task was unable to remove the `backup.lock` file.

To fix the latter, simply remove the `backup.lock` file:

cd ~/cloudbox
rm backup.lock
You must move the `cloudbox` folder to USER2's home folder after installation completes.
Steps to run as USER2:
cp -R /home/USER1/cloudbox /home/USER2
sudo rm -rf /home/USER1/cloudbox
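Since the copy may have been made as root (or USER1), the copied files can still carry the old ownership. An extra, hedged step (this assumes USER2's primary group has the same name as the user):

```bash
# give USER2 ownership of the copied folder
sudo chown -R USER2:USER2 /home/USER2/cloudbox
```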
Full error message:
Error Connecting: Error while fetching server API version: Timeout value connect was Timeout(connect=60, read=60, total=None), but it must be an int or float.
Run `sudo pip install requests==2.10.0` and retry.
403 Client Error: Forbidden: endpoint with name <container name> already exists in network <network name>
Example:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Error starting container 6fb60d4cdabe938986042e06ef482012a1d85a66a099d861f08062d8262c2ef7: 403 Client Error: Forbidden (\"{\"message\":\"endpoint with name jackett already exists in network bridge\"}\")"}
to retry, use: --limit @/home/seed/cloudbox/cloudbox.retry
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1
You have a remnant of the container in Docker's network.
You can verify this with the command below (replace `<network name>` and `<container name>` with the network name and container name mentioned in the error, respectively):
docker network inspect <network name> | grep <container name>
To remove the remnant, run this command and try again:
docker network disconnect -f <network name> <container name>
500 Server Error: Internal Server Error: driver failed programming external connectivity on endpoint <container name> bind for 0.0.0.0:<port number> failed: port is already allocated
Restart the Docker service:

sudo service docker stop
sudo service docker start
If you get any errors during `git pull`, you will need to reset the Cloudbox git folder (i.e. `~/cloudbox/`).
- If you are on the `master` branch (default):
  cd ~/cloudbox
  git reset --hard origin/master
- If you are on the `develop` branch:
  cd ~/cloudbox
  git reset --hard origin/develop
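If you're not sure which branch you're on, git can tell you:

```bash
# show the currently checked-out branch of the Cloudbox repo
cd ~/cloudbox
git rev-parse --abbrev-ref HEAD
```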
Cloudbox now comes with a Settings Updater role, which updates your `settings.yml` file when updates are made to the default settings file (`settings.yml.default`).
This allows new features and variable changes to be added directly into your settings file so that Cloudbox continues to function properly.
A few points regarding this:

- Cloudbox will now come with a `settings.yml.default` file, in lieu of a standard `settings.yml` one.
- Doing a git pull/hard reset will no longer wipe out one's `settings.yml` file.
- When the Cloudbox install/update command is run, any new additions to `settings.yml.default` (e.g. new variables) will be added into the user's `settings.yml` automatically. The install/update will then immediately exit and show this message:

TASK [settings : Check 'settings-updater.py' run status for new settings] **********************
Tuesday 01 May 2018 14:54:42 +0200 (0:00:00.019)       0:00:03.900 ***********
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "The script 'settings_updater.py' added new settings. Check `settings-updater.log` for details of new setting names added."}
	to retry, use: --limit @/home/seed/cloudbox/cloudbox.retry
PLAY RECAP **************************************************************************************
localhost : ok=8 changed=1 unreachable=0 failed=1

- The user can then take a look at `settings.yml` or the `settings_updater.log` file to see what was added.
- After making any necessary changes to the `settings.yml` file, the user can re-run the Cloudbox install/update.
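To see at a glance how your settings differ from the shipped defaults, a plain diff works (paths assume a standard `~/cloudbox` clone):

```bash
# lines starting with '<' are from your settings.yml, '>' from the default template
diff ~/cloudbox/settings.yml ~/cloudbox/settings.yml.default
```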
(1) It keeps all Cloudbox containers organized under one network; and (2) the default bridge network does not allow network aliases.
Issue: Docker apps stop loading and/or docker commands (e.g. `docker --version`) hang.
One reason this can happen is if docker-ce was recently updated.
To fix this:
Stop the service first:

sudo service docker stop

Clean up the leftover files (as mentioned in the forum post linked below):

sudo rm -rf /var/run/docker
sudo rm /var/run/docker.*

Start the service again:

sudo service docker start

Start your docker container:

docker start <container-name>
You might receive an error the first time you run the docker start command:
Error response from daemon: invalid header field value "oci runtime error: container with id exists: 7a244b8f5d07081538042ff64aebfe11fac1a36731526e77be53db7d94dca44d\n"
Error: failed to start containers:
Try running the docker start command again. Your container should then come up without any errors.
source: https://forums.docker.com/t/what-to-do-when-all-docker-commands-hang/28103/5
You can view the status by looking at the log for the `letsencrypt` container.
docker logs -f letsencrypt
Then see if the issues below apply to you.
This happens when SSL certificates have not been issued yet.
You may even see `too many registrations for this IP` in the log (like below):
2017-11-30 03:35:41,847:INFO:simp_le:1538: Retrieving Let's Encrypt latest Terms of Service.
2017-11-30 03:35:42,817:INFO:simp_le:1356: Generating new account key
ACME server returned an error: urn:acme:error:rateLimited :: There were too many requests of a given type :: Error creating new registration :: too many registrations for this IP
Just give it some time (hours to days) and it will resolve itself.
Creating/renewal request.domain.com certificates... (request.domain.com)
2017-12-02 07:34:44,167:INFO:simp_le:1538: Retrieving Let's Encrypt latest Terms of Service.
2017-12-02 07:34:45,331:INFO:simp_le:1356: Generating new account key
2017-12-02 07:34:46,853:INFO:simp_le:1455: Generating new certificate private key
ACME server returned an error: urn:acme:error:rateLimited :: There were too many requests of a given type :: Error creating new cert :: too many certificates already issued for: domain.com
You're limited to 20 new certificates, per registered domain, per week.
Visit https://letsencrypt.org/docs/rate-limits/ for more info.
2017-11-30 03:35:37,729:INFO:simp_le:1538: Retrieving Let's Encrypt latest Terms of Service.
2017-11-30 03:35:40,256:INFO:simp_le:1455: Generating new certificate private key
2017-11-30 03:35:41,406:ERROR:simp_le:1421: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If you haven't setup correct CAA fields or if your DNS provider does not support CAA, validation attempts after september 8, 2017 will fail. Failing authorizations: https://acme-v01.api.letsencrypt.org/acme/authz/XXXXXXXXXX
Challenge validation has failed, see error log.
- Make sure your domain registrar is pointing to the correct server IP address. You can verify this by pinging it (`ping yourdomain.com`).
- Make sure you used the correct domain address in settings.yml.
- Check the letsencrypt logs: `docker logs -f letsencrypt`
- Check the nginx-proxy logs: `docker logs -f nginx-proxy`
- If you see `ERR_TOO_MANY_REDIRECTS`, disable Cloudflare CDN/Proxy.
- If nothing pops out, check the logs for the docker container in question (e.g. `docker logs -f nzbget`). See if it failed to start, was terminated with a kill command, or logged misc errors.
- See if you can load it up via ngrok tunnelling: `ngrok http PORTNUMBER`, then visit the ngrok.io URL it generates.
- Make sure your PC's DNS is updated.
Rclone error: `Failed to save config file: open /home/<user>/.config/rclone/rclone.conf: permission denied`
Replace `user` and `group` to match yours (see here):
sudo chown -R user:group ~/.config/rclone/
sudo chmod -R 0755 ~/.config/rclone/
See Basics: Cloudbox Paths and Prerequisites: Cloud Storage. Remember, folder names mentioned throughout the site are CASE SENSITIVE.
sudo systemctl status plexdrive
This could happen if you already had a user account on the server before adding it to settings.yml.
You simply need to edit 3 files located in `/etc/systemd/system/` (`plex_autoscan.service`, `plexdrive.service`, and `unionfs.service`), like this (or use the one-liner shown after these steps):
sudo nano /etc/systemd/system/plexdrive.service
Replace `User` and `Group` under `[Service]` to match yours (see here):
[Service]
User=yourusername
Group=yourgroupname
After editing all three files, reload systemctl:
sudo systemctl daemon-reload
And restart the services:
sudo systemctl restart plexdrive.service
sudo systemctl restart unionfs.service
sudo systemctl restart plex_autoscan.service
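If you'd rather not open each file by hand, a one-liner can patch all three units at once (a sketch; it assumes each file contains exactly one `User=` and one `Group=` line):

```bash
# rewrite the User= and Group= lines in all three service files, then reload systemd
sudo sed -i 's/^User=.*/User=yourusername/; s/^Group=.*/Group=yourgroupname/' \
  /etc/systemd/system/plex_autoscan.service \
  /etc/systemd/system/plexdrive.service \
  /etc/systemd/system/unionfs.service
sudo systemctl daemon-reload
```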
You may resolve this by one of the following methods:

1. Installing Plex again (do this for new Plex DBs/installs):
   - Remove the Plex container (it may show "Error response from daemon: No such container" if it has not been created yet): `sudo docker rm -f plex`
   - Remove the Plex folder: `sudo rm -rf /opt/plex`
   - Reinstall the Plex container by running the following command in `~/cloudbox`: `sudo ansible-playbook cloudbox.yml --tags plex`
2. Installing Plex again (do this for existing Plex DBs/installs):
   - Remove the Plex preferences file: `sudo rm "/opt/plex/Library/Application Support/Plex Media Server/Preferences.xml"`
   - Reinstall the Plex container by running the following command in `~/cloudbox`: `sudo ansible-playbook cloudbox.yml --tags plex`
3. Using SSH tunneling to log into Plex and set your credentials:
   - On your host PC (replace `<user>` with your user name and `<yourserveripaddress>` with your server IP address - no arrows): `ssh <user>@<yourserveripaddress> -L 32400:0.0.0.0:32400 -N`
     This will just hang there without any message. That is normal.
   - In a browser, go to http://localhost:32400/web.
   - Log in with your Plex account.
   - On the "How Plex Works" page, click "GOT IT!".
   - Close the "Plex Pass" pop-up if you see it.
   - Under "Server Setup", you will see "Great, we found a server!". Give your server a name and tick "Allow me to access my media outside my home". Click "NEXT".
   - On "Organize Your Media", hit "NEXT" (you will do this later). Then hit "DONE".
   - At this point, you may press Ctrl + C on the SSH tunnel to close it.
Reorder the Plex agents for TV/Movies so that local assets are at the bottom.
Replace `user` and `group` to match yours (see here):
sudo chown -R user:group /opt/plex/Library/Logs
sudo chmod -R g+s /opt/plex/Library/Logs
Note: If you have a separate Plex and Feeder setup, this will be done on the server where Plex is installed.
You will need to get the Plex section IDs and replace them in the Plex Autoscan config:
- Get the section IDs by running: `/opt/plex_autoscan/scan.py sections`
- Edit the Plex Autoscan config via `nano /opt/plex_autoscan/config/config.json` and switch the ID numbers to match the section IDs from Step 1 (for a more detailed explanation, see this).
- Restart Plex Autoscan: `sudo systemctl restart plex_autoscan`
- Test another download and run the following command: `tail -f /opt/plex_autoscan/plex_autoscan.log`
If you see this...

terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
boost::filesystem::create_directories: Permission denied: "/config/Library/Logs"

...there is an issue with the permissions on that folder, which you'll need to fix manually (Cloudbox can't fix this, as Plex creates this folder after the first scan).

To fix it, run the following commands. Replace `user` and `group` to match yours (see here):

docker stop plex
sudo chown -R user:group /opt/plex
docker start plex
Example of a successful scan:
2017-10-10 17:48:26,429 - DEBUG - PLEX [ 6185]: Waiting for turn in the scan request backlog...
2017-10-10 17:48:26,429 - INFO - PLEX [ 6185]: Scan request is now being processed
2017-10-10 17:48:26,474 - INFO - PLEX [ 6185]: No 'Plex Media Scanner' processes were found.
2017-10-10 17:48:26,474 - INFO - PLEX [ 6185]: Starting Plex Scanner
2017-10-10 17:48:26,475 - DEBUG - PLEX [ 6185]: docker exec -u plex -i plex bash -c 'export LD_LIBRARY_PATH=/usr/lib/plexmediaserver;/usr/lib/plexmediaserver/Plex\ Media\ Scanner --scan --refresh --section 1 --directory '"'"'/data/Movies/Ravenous (1999)'"'"''
2017-10-10 17:48:33,712 - INFO - UTILS [ 6185]: GUI: Scanning Ravenous (1999)
2017-10-10 17:48:33,959 - INFO - UTILS [ 6185]: GUI: Matching 'Ravenous'
2017-10-10 17:48:38,556 - INFO - UTILS [ 6185]: GUI: Score for 'Ravenous' (1999) is 117
2017-10-10 17:48:38,607 - INFO - UTILS [ 6185]: GUI: Requesting metadata for 'Ravenous'
2017-10-10 17:48:38,705 - INFO - UTILS [ 6185]: GUI: Background media analysis on Ravenous
2017-10-10 17:48:39,201 - INFO - PLEX [ 6185]: Finished scan!
ERROR - PLEX [10490]: Unexpected response status_code for empty trash request: 401
You need to generate another token and re-add it to the config. See Plex Autoscan.
Example Log:
2017-11-21 04:26:32,619 - ERROR - PLEX [ 7089]: Exception finding metadata_item_id for '/data/TV/Gotham/Season 01/Gotham - S01E01 - Pilot.mkv':
Traceback (most recent call last):
File "/opt/plex_autoscan/plex.py", line 208, in get_file_metadata_id
media_item_id = c.execute("SELECT * FROM media_parts WHERE file=?", (file_path,)).fetchone()[1]
TypeError: 'NoneType' object has no attribute '__getitem__'
2017-11-21 04:26:32,619 - INFO - PLEX [ 7089]: Aborting analyze of '/data/TV/Gotham/Season 01/Gotham - S01E01 - Pilot.mkv' because could not find a metadata_item_id for it
Possible Issues:
- One of the mounts has changed (e.g. Plexdrive or UnionFS was restarted).
- Permission issues (see here).

Fix:

- Make sure Plexdrive and UnionFS are working OK:
  sudo systemctl status plexdrive
  sudo systemctl status unionfs
- Restart Plex: `docker stop plex && docker start plex`
Every time Sonarr or Radarr downloads a new file, or upgrades a previous one, a request is sent to Plex via Plex Autoscan to scan the movie folder or TV season path and look for changes. Since Sonarr and Radarr delete the previous files on upgrades, the scan will cause the new media to show up in your Plex library; however, the deleted files would be missing and instead marked as "unavailable" (i.e. a trash icon). When the control file is present and the option in the Plex Autoscan config is enabled (the default), Plex Autoscan will empty the trash for you, thereby removing the deleted media from the library.
If your Google Drive ever disconnected during a Plex scan of your media, Plex would mark the missing files as unavailable, and emptying the trash would cause them to be removed from the library. To prevent this, Plex Autoscan checks for a control file in the unionfs path (i.e. `/mnt/unionfs/mounted.bin`) before running any empty trash commands. The control file is just a blank file that resides in the root folder of Google Drive and lets Plex Autoscan know that your Google Drive is mounted.
Once Google Drive is remounted, all the files marked unavailable in Plex will be playable again, and Plex Autoscan will resume its trash emptying duties post-scan.
To learn more about Plex Autoscan, visit https://github.com/l3uddz/plex_autoscan.
TL;DR: Without the control file, Plex Autoscan will not remove deleted media from Plex.
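Conceptually, the safety check behaves like the sketch below (an illustration of the logic only, not Plex Autoscan's actual code):

```bash
# only empty Plex's trash when the cloud mount's control file is visible
if [ -f /mnt/unionfs/mounted.bin ]; then
    echo "Control file present: Google Drive is mounted; safe to empty trash."
else
    echo "Control file missing: mount is down; skipping empty trash to protect the library."
fi
```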
If you are using an all-in-one Cloudbox and don't want to have the Plex Autoscan port open, you may set it up so that it runs on the localhost only.
To do so, follow these steps:
Plex Autoscan: (only if changed from default)
- Edit `/opt/plex_autoscan/config/config.json`.
- Make the following edit: `"SERVER_IP": "0.0.0.0",`
  Note: This is the default config.
Sonarr/Radarr:
- Retrieve the 'Docker Gateway IP Address' by running the following: `docker inspect -f '{{ .NetworkSettings.Networks.cloudbox.Gateway }}' sonarr`
- Replace the Plex Autoscan URL with: `http://docker_gateway_ip_address:3468/yourserverpass`
- Your Plex Autoscan URL will now look like this: `http://172.18.0.1:3468/yourserverpass`
Alternatively, you can set it up this way.
Note: This method benefits from completely closing off Plex Autoscan to the outside.
Plex Autoscan:
- Retrieve the 'Docker Gateway IP Address' by running the following: `docker inspect -f '{{ .NetworkSettings.Networks.cloudbox.Gateway }}' sonarr`
- Edit `/opt/plex_autoscan/config/config.json`.
- Make the following edit: `"SERVER_IP": "docker_network_gateway_ip_address",`
- This will now look like this: `"SERVER_IP": "172.18.0.1",`
Sonarr/Radarr:
- Replace the Plex Autoscan URL with: `http://docker_gateway_ip_address:3468/yourserverpass`
- Your Plex Autoscan URL will now look like this: `http://172.18.0.1:3468/yourserverpass`
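Either way, you can confirm Plex Autoscan is listening where you expect (3468 is the port used throughout these examples) with a standard socket check:

```bash
# shows which address the Plex Autoscan port is bound to (0.0.0.0 vs the docker gateway IP)
sudo ss -tlnp | grep 3468
```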
If the activity log is stuck on:
2018-06-03 13:44:59,659 - INFO - cloudplow - do_upload - Waiting for running upload to finish before proceeding...
This means that an upload task was prematurely canceled and left lock file(s) behind that prevent another upload.
To fix this, run this command:
rm -rf /opt/cloudplow/locks/*
or
sudo systemctl restart cloudplow
Cloudbox uses Sonarr's develop branch and Radarr's nightly branch during install. If you want to import an existing database that is on Sonarr's master branch or Radarr's develop branch (the two most stable branches), you should upgrade to those releases on a working installation first, make a backup, and then import into the respective folders (i.e. `/opt/sonarr/` or `/opt/radarr/`).
To change the password for the current ruTorrent container:

- Stop the container: `docker stop rutorrent`
- Go into the folder where .htpasswd resides: `cd /opt/rutorrent/nginx`
- Rename the old .htpasswd: `mv .htpasswd .htpasswd.bak`
- Generate a new .htpasswd (where `USER` is your username and `PASSWORD` is the new password): `printf "USER:$(openssl passwd -1 PASSWORD)\n" >> .htpasswd`
- Verify that `/opt/rutorrent/nginx/nginx.conf` has `auth_basic "Restricted Content";` and `auth_basic_user_file /config/nginx/.htpasswd;` inside all location references.
- Start the container: `docker start rutorrent`
To change the download directory:

- Edit the rtorrent.rc file: `/opt/rutorrent/rtorrent/rtorrent.rc`
- Edit the following line to point at the new location: `directory = /downloads/rutorrent`
- Restart the ruTorrent Docker container: `docker restart rutorrent`
The DB data is stored in `/opt/mariadb` and is backed up along with the Cloudbox backup.
However, you can separately make a backup of the DB into a single `nextcloud_backup.sql` file by running the following command:
docker exec mariadb /usr/bin/mysqldump -u root --password=password321 nextcloud > nextcloud_backup.sql
And to restore it:
cat nextcloud_backup.sql | docker exec -i mariadb /usr/bin/mysql -u root --password=password321 nextcloud
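If you want this backup to happen on a schedule, a cron entry is one option (the time and destination path below are hypothetical; adjust both):

```bash
# run 'crontab -e' and add a line like this to dump the DB daily at 4am
0 4 * * * docker exec mariadb /usr/bin/mysqldump -u root --password=password321 nextcloud > /home/user/nextcloud_backup.sql
```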
Python or script errors mentioning an issue with the config file are usually due to invalid JSON formatting in the file.
Examples:
Traceback (most recent call last):
File "scan.py", line 52, in <module>
conf.load()
File "/opt/plex_autoscan/config.py", line 157, in load
cfg = self.upgrade(json.load(fp))
File "/usr/lib/python2.7/json/__init__.py", line 291, in load
**kw)
File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Expecting ',' delimiter: line 20 column 2 (char 672)
Traceback (most recent call last):
File "/opt/plex_autoscan/scan.py", line 52, in <module>
conf.load()
File "/opt/plex_autoscan/config.py", line 157, in load
cfg = self.upgrade(json.load(fp))
File "/usr/lib/python2.7/json/init.py", line 291, in load
**kw)
File "/usr/lib/python2.7/json/init.py", line 339, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
Traceback (most recent call last):
File "/usr/local/bin/cloudplow", line 60, in <module>
conf.load()
File "/opt/cloudplow/utils/config.py", line 227, in load
cfg, upgraded = self.upgrade_settings(json.load(fp))
File "/usr/lib/python3.5/json/__init__.py", line 268, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.5/json/decoder.py", line 355, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 46 column 13 (char 1354)
Fixes:
- Paste the JSON file at https://jsonformatter.curiousconcept.com/ and click "Process". This will tell you what the issue is and fix it for you.

or
- Run: `jq '.' config.json`
If there are no issues, it will simply print out the full JSON file.
If there is an issue, a message will be displayed showing its location:
parse error: Expected separator between values at line 7, column 10
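If `jq` isn't installed on your server, it's available from the standard Ubuntu repositories:

```bash
sudo apt-get install -y jq
```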