Dev Environment ‐ Set up on Phone

Installation steps

  • Update Termux: apt-get update && apt-get upgrade -y
  • Install wget: apt-get install wget -y
  • Install proot: apt-get install proot -y
  • Install git: apt-get install git -y
  • Go to HOME folder: cd ~
  • Download script: git clone https://github.com/MFDGaming/ubuntu-in-termux.git
  • Go to script folder: cd ubuntu-in-termux
  • Give execution permission: chmod +x ubuntu.sh
  • Run the script: ./ubuntu.sh -y
  • Now just start Ubuntu: ./startubuntu.sh (the full sequence is collected below)
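
For convenience, the same sequence as a single copy-pasteable block (identical commands to the list above):

apt-get update && apt-get upgrade -y
apt-get install wget proot git -y
cd ~
git clone https://github.com/MFDGaming/ubuntu-in-termux.git
cd ubuntu-in-termux
chmod +x ubuntu.sh
./ubuntu.sh -y
./startubuntu.sh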

To Check Ubuntu Version

lsb_release -a
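
Typical output looks like the following (the exact values depend on the Ubuntu release the script installed):

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:        22.04
Codename:       jammy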

Termux

How to install Termux X11 native DESKTOP on ANDROID (no proot) - [No Root] - Linux on Android

Setup docker-qemu-arm

git clone https://github.com/antonyz89/docker-qemu-arm
cd docker-qemu-arm
./termux-setup.sh

Refer: https://github.com/AntonyZ89/docker-qemu-arm

How to run Docker on Android


Install docker-qemu-arm as described above.

Log in to Alpine by navigating to the alpine folder:

cd alpine
./startqemu.sh

user: root, password: Secret123

Port forwarding

Important: the runtime stack is Android -> Termux -> Alpine -> Docker daemon -> containers.

So port forwarding is mandatory to expose a container's port from Alpine to Termux in order to reach its endpoints.

How to forward a port from native Termux into Alpine so that apps running there are reachable:

From the docker-qemu-arm/alpine folder:

ssh -i qemukey -L 8080:localhost:4647 root@localhost -p 2222

This forwards port 8080 in native Termux to port 4647 inside the Alpine guest, so anything listening on 4647 in Alpine (e.g. a published container port) becomes reachable at localhost:8080 on the phone.
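
A minimal end-to-end sketch (the nginx image and the port numbers are illustrative):

# Inside Alpine: run a container and publish it on port 4647
docker run -d --name web -p 4647:80 nginx:alpine

# Inside Termux, from the docker-qemu-arm/alpine folder (keep this session open):
ssh -i qemukey -L 8080:localhost:4647 root@localhost -p 2222

# On the phone (another Termux session, or a browser): the container answers on 8080
curl http://localhost:8080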

Choosing an LLM for mobile


When running everything offline on a phone such as the Samsung Galaxy S23 Ultra, you need a model that balances performance, accuracy, and efficiency on mobile hardware. Here are some options for different use cases:


  1. Best for General Chat & Reasoning (Balanced Performance)

✅ Mistral 7B (GGUF or GPTQ)

Why? One of the best small models with strong reasoning & efficiency.

Size: ~4GB (4-bit quantized)

Where to get it?

GGUF (for llama.cpp) → Mistral 7B GGUF (TheBloke)

GPTQ (for AutoGPTQ/transformers) → Mistral 7B GPTQ (TheBloke)

Best for: Chatbots, general Q&A, reasoning


  2. Best for Travel & Information Retrieval (Small & Efficient)

✅ Phi-2 (2.7B)

Why? Lightweight model with decent reasoning & small size.

Size: ~2GB (4-bit quantized)

Where to get it? Phi-2 (TheBloke)

Best for: Running RAG with travel PDFs, embeddings, and chatbot use


  3. Best for Large-Scale RAG (If You Can Handle It)

✅ Llama 3 (8B)

Why? Meta's newest model at the time of writing; better than Llama 2, but heavy for mobile.

Size: ~5-8GB (quantized)

Where to get it? Llama 3 GGUF (TheBloke)

Best for: the latest capabilities, if your device can handle larger models.


  4. Best for Super Small & Fast Execution (Low RAM)

✅ Gemma 2B (Google)

Why? Very light, best for devices with lower RAM (fast but less powerful).

Size: ~1.5GB (quantized)

Where to get it? Gemma 2B GGUF

Best for: Super-lightweight chatbot & quick inference
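
Any of the GGUF options above can be run the same way with llama.cpp. A minimal sketch, assuming a recent llama.cpp checkout built in a Termux shell; the model filename is illustrative, and older llama.cpp versions produced ./main instead of llama-cli:

# Build llama.cpp from source
pkg install git cmake clang
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build -j

# Run an interactive prompt against a downloaded quantized model
./build/bin/llama-cli -m ~/models/mistral-7b-instruct.Q4_K_M.gguf -p "Hello"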


Which One Should You Choose?

| Model | Size (quantized) | Best for |
| --- | --- | --- |
| Mistral 7B | ~4GB | General chat, Q&A, reasoning |
| Phi-2 (2.7B) | ~2GB | RAG over documents, embeddings, lightweight chat |
| Llama 3 (8B) | ~5-8GB | Large-scale RAG, latest capabilities |
| Gemma 2B | ~1.5GB | Low-RAM devices, fastest inference |


Set up Ollama on the S23 using Termux


Steps to Run Llama 3.2 on Android

1. Install Termux on Android

Termux is a terminal emulator that allows Android devices to run a Linux environment without needing root access. It’s available for free and can be downloaded from the Termux GitHub page.

For this guide, download the termux-app_v0.119.0-beta.1+apt-android-7-github-debug_arm64-v8a.apk and install it on your Android device.

2. Set Up Termux

After launching Termux, follow these steps to set up the environment:

Grant Storage Access:

termux-setup-storage

Update Packages:

pkg update && pkg upgrade

Install Essential Tools:

pkg install git cmake golang

3. Install and Compile Ollama

Ollama is a platform for running large language models locally. Here’s how to install and set it up:

Clone Ollama's GitHub Repository:

git clone --depth 1 https://github.com/ollama/ollama.git
cd ollama
go generate ./...
go build .
./ollama serve &

Now the Ollama server will run in the background, allowing you to interact with the models.

4. Running Llama 3.2 Models

To run the Llama 3.2 model on your Android device, follow these steps:

Choose a Model:

Models like llama3.2:3b (3 billion parameters) are available for testing. These models are quantized for efficiency. You can find a list of available models on Ollama’s website.

Download and Run the Llama 3.2 Model:

./ollama run llama3.2:3b --verbose
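
Once the model has been pulled, the background server can also be queried over Ollama's local REST API (it listens on port 11434 by default):

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'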

5. Optional Cleanup

After using Ollama, you may want to clean up the system:

Remove Unnecessary Files (the Go module cache is written read-only, so fix permissions before deleting):

chmod -R 700 ~/go
rm -r ~/go

Copy the Ollama binary to a global path so you can run ollama directly from the terminal:

cp ollama/ollama /data/data/com.termux/files/usr/bin/

How to package an nginx app, dockerize it, and run it
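
A minimal sketch, run inside the Alpine guest where the Docker daemon lives. The image name, the html/ folder, and port 4647 are illustrative; 4647 matches the ssh forward shown in the port-forwarding section above:

# Create a minimal Dockerfile for a static site served by nginx
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY html/ /usr/share/nginx/html/
EOF

# Build the image and run it, publishing port 4647 inside Alpine
docker build -t my-nginx-app .
docker run -d --name my-nginx -p 4647:80 my-nginx-app

# With the ssh forward active, the app is reachable from the phone at http://localhost:8080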
