Getting the Configuration Files

Getting the credentials.json file

  • Visit the Google Cloud Platform
  • Go to the OAuth consent screen tab, fill in the form and save it
  • Then go to the Credentials tab and click on Create Credentials -> OAuth client ID
  • Choose Desktop app from the list and click on the Create button
  • Now click on the Download JSON button to download the credentials file
  • Move that file to the root of the repo and rename it to credentials.json
  • Then run the below command to install the required dependencies
    pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib
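
For reference, this is roughly how these libraries consume the credentials file; a minimal sketch, assuming a Drive scope (the scope and port here are illustrative, not taken from the repo's scripts):

    # Minimal sketch of the installed-app OAuth flow with credentials.json.
    # The scope is an assumption for illustration; the repo's scripts define their own.
    from google_auth_oauthlib.flow import InstalledAppFlow

    SCOPES = ['https://www.googleapis.com/auth/drive']  # assumed scope

    flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
    creds = flow.run_local_server(port=0)  # opens a browser for consent
    print(creds.to_json())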
    

Getting the drive_list file

  • Run the below script and follow the on-screen instructions to generate the drive_list file
    python3 gen_list.py
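
The script prompts for the drives to index. As a rough illustration (the exact format is an assumption; gen_list.py is authoritative), each line of the generated drive_list pairs a drive name with a Drive folder ID and an optional index URL, all values below being placeholders:

    MyDrive 0AbCdEfGhIjKlUk9PVA https://index.example.workers.dev/0:
    Backup 1xYzAbCdEfGhIjKlMnOpQrStUvWxYz123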
    

Getting the token.json file

  • Run the below script to generate the token.json file
    python3 gen_token.py
    
    Note: An existing token.pickle file can also be converted to token.json by running the same script
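
As a rough sketch of what that conversion involves (assuming the pickle holds a google-auth Credentials object; gen_token.py may differ in detail):

    # Sketch: re-serialize an existing token.pickle as token.json.
    import pickle

    with open('token.pickle', 'rb') as f:
        creds = pickle.load(f)  # assumed to be google.oauth2.credentials.Credentials

    with open('token.json', 'w') as f:
        f.write(creds.to_json())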

Setting up the config.env file

Rename config_sample.env to config.env
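
On Linux/macOS this can be done with:

    mv config_sample.env config.env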

Required config

  • BOT_TOKEN: Get this token by creating a bot with @BotFather
  • OWNER_ID: Fill in the Telegram user ID of the bot owner
  • DRIVE_FOLDER_ID: Fill in the ID of the Google Drive folder where the data will be cloned
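
For example, a minimal config.env might look like this (all values below are placeholders; follow the key/value style of config_sample.env):

    BOT_TOKEN = "123456789:ABCdefGHIjklMNOpqrSTUvwxYZ"
    OWNER_ID = "1234567890"
    DRIVE_FOLDER_ID = "1a2B3c4D5e6F7g8H9i0JkLmN"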

Optional config

  • AUTHORIZED_USERS: Fill in the user_id and/or chat_id values you want to authorize, separated by a space. Example: "1234567890 -1122334455 921229569"
  • DATABASE_URL: Create a cluster on MongoDB to get this value
  • IS_TEAM_DRIVE: Set to True if the DRIVE_FOLDER_ID is from a Shared Drive
  • USE_SERVICE_ACCOUNTS: Set to True if the data is to be cloned using Service Accounts. Refer to Getting the Service Account files for this to work.
  • DOWNLOAD_DIR: Fill in the path of the local folder where the data will be downloaded
  • STATUS_UPDATE_INTERVAL: Interval in seconds at which the task progress gets updated (Default is set to 10)
  • TELEGRAPH_ACCS: Set how many Telegraph tokens will be generated (Default is set to 1)
  • INDEX_URL: Refer to maple's GDIndex or Bhadoo's Google Drive Index. Note: The Index URL should not have any trailing '/'
  • ARCHIVE_LIMIT: Set the size limit of data compression and extraction tasks. Note: Only integer values are supported. The default unit is 'GB'
  • CLONE_LIMIT: Set the size limit of data clone tasks. Note: Only integer values are supported. The default unit is 'GB'
  • TOKEN_JSON_URL: Fill in the direct download link of the token.json file
  • ACCOUNTS_ZIP_URL: Archive the accounts folder to a zip file, then fill in the direct download link of that file
  • DRIVE_LIST_URL: Upload the drive_list file on GitHub Gist, open the raw link of that gist and remove the commit ID from the link, then fill the var with that link. Note: This var is required for Deploying with Workflow to Heroku
  • GDTOT_CRYPT: Refer to Getting the GDToT cookies and fill the var with the CRYPT value
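
As an illustration, a few of the optional vars filled with placeholder values:

    AUTHORIZED_USERS = "1234567890 -1122334455 921229569"
    IS_TEAM_DRIVE = "True"
    STATUS_UPDATE_INTERVAL = "10"
    CLONE_LIMIT = "500"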

Getting the Service Account files

Warning: Do not abuse this feature. Creating a lot of projects is not recommended; a single project with 100 SAs allows plenty of use. Over-abusing it may also get the projects banned by Google.

NOTE: If you have created SAs with this script in the past, you can also just re-download the keys by running the below script

python3 gen_sa.py --download-keys PROJECTID

Two methods are available for creating Service Accounts:

Creating SAs in existing project (Recommended)
  • List projects ids
    python3 gen_sa.py --list-projects
    
  • Enable services
    python3 gen_sa.py --enable-services PROJECTID
    
  • Create Service Accounts
    python3 gen_sa.py --create-sas PROJECTID
    
  • Download Service Accounts
    python3 gen_sa.py --download-keys PROJECTID
    

    Note: Remember to replace PROJECTID with your project's ID

Creating SAs in a new project
  • Run the below script to generate Service Accounts and download the keys automatically
    python3 gen_sa_accounts.py --quick-setup 1 --new-only
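
Once downloaded, the key files can be loaded with google-auth. A minimal sketch, assuming the keys land in the accounts folder referenced by ACCOUNTS_ZIP_URL above and a file named 0.json (the file name and scope are illustrative assumptions):

    # Sketch: build Drive credentials from a downloaded Service Account key.
    from google.oauth2 import service_account

    SCOPES = ['https://www.googleapis.com/auth/drive']  # assumed scope

    creds = service_account.Credentials.from_service_account_file(
        'accounts/0.json', scopes=SCOPES)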
    

Getting the GDToT cookies

  • Login / Register to GDToT
  • Copy and paste the below script into the address bar
    javascript:(document.cookie);
  • Copy the value of the CRYPT key from the cookie string that gets displayed