Page Index - ObrienlabsDev/blog GitHub Wiki
88 pages in this GitHub Wiki:
- Home
- Michael O'Brien | Cloud Engineer : Ex-Google | Cloud Architect / Developer | Java | CUDA | Kubernetes | GCP | AWS
- Articles
- Open Source
- Markdown
- diagrams
- Architecture
- AWS
- Biometric Dual Heart Rate Streaming from Mobile Devices
- Calling Google Cloud APIs privately from on prem using Private Service Connect
- Certification
- Cloud Design Patterns
- Code‐References
- Cross‐Cloud
- CUDA based ‐ High Performance Computing ‐ LLM Training ‐ Ground to GCP Cloud Hybrid
- Developer Guide
- Developers using Large Language Models to help write and deploy Code
- Drone Streaming Extraction
- FinOps
- Google Cloud
- Google Cloud API SDK client call from Java Spring Boot Application
- Google Cloud Earth Engine ‐ HPC integration
- Google Cloud Hybrid Private Ground to Cloud Networking with Private Service Connect
- Google Cloud Landing Zone Comparisons
- Google Gemini LLM
- Google Gemma 2 27b Model Inference
- Google Maps API based Javascript Web Application
- Hardware
- Java 21 on Google Cloud for Overly Enthusiastic Developers
- Kubernetes
- Languages
- LLM and Machine Learning Custom Hardware
- LLMs ‐ Yes and Why
- Machine Learning
- Machine Learning for Java Developers
- Machine Learning on local or Cloud based NVidia or Apple GPUs
- Performance
- Personal
- Quantum Computing
- ReadingList
- Retrieval Augmented Generation
- Running the larger Google Gemma 7B 35GB LLM on a single GPU for over 7x Inference Performance gains
- Serverless Reference Architecture
- TODO
- Training
- Using Duet AI for LLM based Acceleration in GCP
- Virtualization