I’m an Engineer / Architect / Builder. For more about me, see About.
This is a reboot of writing my notes and thoughts online after a pause of several years.
Recent Posts
- Beyond Answers, Below Autonomy: How Proactive AI Agents Offload Humans Without Overstepping
  Proactive agents can offload humans from the glue work between insight and execution: they watch your systems, gather context, and turn signals into decision-ready options and actions. The key is staying below autonomy, because implicit context and accountability still sit with humans.
- Using BERT to perform Topic Tag Prediction for Technical Articles
  Updated: Experiments using BERT Mini embeddings and linear SVM for multilabel tag prediction on LinkedInfo articles.
- A Walk Through of the IEEE-CIS Fraud Detection Challenge
  Walkthrough of the IEEE-CIS fraud detection challenge with feature analysis and model experiments.
- Skin Lesion Image Classification with Deep Convolutional Neural Networks
  Updated: Deep CNN experiments (DenseNet/ResNet) on HAM10000 for skin lesion image classification.
Latest Notes
- Make Yourself an Outlier
  The only way to avoid being replaced by AI is to make yourself an outlier. LLMs and agents can easily make you averagely good, but they won't make you excellent, because excellence is, by definition, an outlier.
- About Scaling of Sleep-time Compute for Agents
  You often hear about scaling laws: pre-training scaling, post-training scaling, or test-time scaling. You may not often hear about sleep-time scaling. It's worth a few minutes to discuss what it is and how it would scale along the volume of context/memory, latency, and proactiveness.
- Human-Written AGENTS.md Still Wins
  LLM-generated repo context files can be redundant and can even hurt coding-agent performance, while human-written ones are more likely to close the real context gap.
- Context Learning Is Still Harder Than It Looks
  CL-bench shows that frontier models still struggle to genuinely learn and apply novel knowledge that was not part of pre-training.