Software I use, gadgets I love,
and other things I recommend.
I get asked a lot about the tools I use to build software and conduct AI research. This is a living document of the gear and apps that keep me productive.
Daily drivers
The physical setup I spend the most hours with, from the laptop to the devices around it.
Apple MacBook Pro (16-inch, M4 Pro chip, 24GB Unified Memory, 512GB)
macOS for programming and browsing, with heavier runs pushed to Linux over SSH.
LG UltraGear 32-inch (x2)
External monitors above the laptop for browsers, note-taking, terminals, docs, and similar side-by-side work.
Apple iPad Pro 3rd Generation (12.9-inch, 256GB)
Reading books and taking handwritten notes.
Sony WH-1000XM5
Work headphones, comfortable and noise-cancelling.
Beyerdynamic DT 770 Pro (250 ohm)
Home headphones, comfortable closed-back cans with solid passive isolation.
Anker Soundcore Sport X20
Gym headphones, comfortable and noise-cancelling, with a secure fit that stays put during workouts.
HUAWEI FreeClip
Outdoor headphones; the open-ear design lets me hear my surroundings while still enjoying my music and podcasts.
The tools I open constantly for browsing, planning, notes, design, AI, and life outside work.
Browsing & desktop
Google Chrome
My main browser, and where I let an AI agent take computer-use-style control of tabs when I need automation beyond normal browsing.
Raycast
My launcher of choice on macOS—faster and more capable than Spotlight for me. I use it for window management, clipboard history, and quick calculations.
Design & AI
LM Studio
Where I pull open-source models, try them locally, and spin up a quick server when I want to experiment off the cloud.
The languages, editors, frameworks, and delivery layer I use when I am building products.
Languages
Python
Default language for ML, scripts, and most backend work.
TypeScript / JavaScript
My web and mobile languages: TypeScript for UI and API layers where types pay off, plain JavaScript for browser-native scripts and quick one-offs.
SQL
Relational queries, analytics, and schema design.
Bash
Shell scripting, glue between tools, and automation.
Web & mobile
Next.js
React-based web framework with routing, SSR, and API routes in one stack.
Astro
Static websites and content-heavy pages with minimal client JS when I want speed and simple deploys.
React Native
Cross-platform mobile apps with a shared JavaScript/TypeScript core.
Node.js / Express.js
JavaScript runtime for CLIs and services, plus Express when I need routing and middleware.
FastAPI
Async Python APIs and lightweight model serving with clear OpenAPI surfaces.
Delivery & access
Tailscale
Private network between my laptop and remote machines so SSH and internal services stay easy to reach.
GitHub Actions
CI for tests, builds, and lightweight automation on every push.
Cloudflare Tunnel
Exposes only the endpoints that need to be public, without opening inbound ports to the whole machine.
Cloudflare
DNS, CDN, security, and edge plumbing for domains I run.
The systems underneath the work: data tooling, research stack, rented compute, and production orchestration.
Data & analytics
NumPy
N-dimensional arrays and vectorized numerics in Python.
Pandas
Tables, joins, and time-series prep before modeling or SQL.
PostgreSQL
Relational source of truth for apps, features, and anything that needs strong consistency.
ClickHouse
Columnar OLAP store for analytics, wide event logs, and heavy aggregations without slowing OLTP.
MongoDB
Document database for flexible schemas and typical NoSQL workloads.
Redpanda / Kafka
Kafka-style streaming and durable logs for pipelines, with Redpanda as the implementation I reach for first.
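A tiny illustration of the Pandas prep mentioned above: join an events table to a lookup, then resample into daily counts before any modeling or SQL. All column names and values here are invented for the example:

```python
import pandas as pd

events = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 09:00", "2024-01-01 17:30", "2024-01-02 08:15"]),
    "user_id": [1, 2, 1],
})
users = pd.DataFrame({"user_id": [1, 2], "plan": ["free", "pro"]})

# Join the lookup, then bucket events into daily counts per plan.
daily = (
    events.merge(users, on="user_id", how="left")
          .set_index("ts")
          .groupby("plan")
          .resample("D")
          .size()
          .rename("events")
          .reset_index()
)
```

The same shape of query moves to PostgreSQL or ClickHouse once the data outgrows a DataFrame.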
ML & research
PyTorch
Primary ML framework for research and custom model work.
PyTorch Lightning
Higher-level training loops and project structure when I want PyTorch with less boilerplate.
scikit-learn
Classical ML baselines, preprocessing, and tabular pipelines.
Hugging Face
Models, datasets, and the Transformers stack when I train or ship neural nets.
Jupyter
Interactive notebooks for quick probes, figures, and reproducible snippets.
Weights & Biases (W&B)
Rich experiment dashboards and sweeps for research iterations and sharing runs.
MLflow
Registry-backed tracking and deployment workflows when the product needs a governed ML lifecycle.
DSPy
Structured LLM programs, optimizers, and eval loops instead of one-off prompt spaghetti.
Unsloth
Memory-efficient fine-tuning when I need LoRA/PEFT without maxing VRAM.
OpenRouter
Unified API access to many LLMs so I can call models from one key instead of juggling every provider separately.
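For the scikit-learn baselines mentioned above, the usual shape is a Pipeline so preprocessing and the model are fit together and travel as one object. This uses toy data and is purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling is fit only on the training split inside the pipeline,
# so test data never leaks into the preprocessing statistics.
baseline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
baseline.fit(X_train, y_train)
accuracy = baseline.score(X_test, y_test)
```

A baseline like this is the sanity check before reaching for PyTorch.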
Compute & hosting
Jarvislab
GPU hosts I SSH into for training and heavier ML jobs I do not want on the laptop.
Vast.ai
Cheap rented GPUs for quick experiments and training sprints—not always the most stable, but fast to spin up when cost matters more than polish.
Hetzner
Web servers and microservices—cheap, reliable Linux when something should stay online.
Modal
Serverless deployment for AI models and GPU jobs; great for one-off or bursty workloads where I don't want to babysit a cluster.
Serving & orchestration
Docker
Portable environments from laptop to CI to cloud so runs match everywhere.
Kubernetes
Orchestration when workloads need rollouts, scaling, and multi-node ops.
Redis
Caches, rate limits, and Celery broker when jobs need to fan out fast.
Celery
Distributed task queues for long-running training jobs, evals, and batch pipelines.
Temporal
Durable workflows and reliable orchestration when pipelines need retries, timers, and human steps without losing state.
vLLM
High-throughput LLM inference when latency and batching matter in production.
Alembic
SQLAlchemy migrations when schema changes need reviewable history and repeatable deploys.
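The reviewable-history point is easiest to see in the migration files themselves. This is the kind of script Alembic generates, trimmed to a sketch; the table, column, and revision ID are invented for illustration:

```python
"""Add a last_seen column to users."""
from alembic import op
import sqlalchemy as sa

# Placeholder identifiers; Alembic generates the real revision chain.
revision = "abc123"
down_revision = None
branch_labels = None
depends_on = None

def upgrade() -> None:
    op.add_column("users", sa.Column("last_seen", sa.DateTime(), nullable=True))

def downgrade() -> None:
    op.drop_column("users", "last_seen")
```

Because each change is a paired upgrade/downgrade in version control, schema history gets code review like everything else, and `alembic upgrade head` replays it identically in every deploy.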