Notes


Coordinate to Timezone Lookup Library

April 5, 2026

Looking up the timezone for a given latitude/longitude coordinate is a common problem in some software domains. There are several databases, like timezone-boundary-builder, that map geographic polygons to timezones and are updated as political boundaries change. However, the full GeoJSON file is large and complex: naive point-in-polygon lookups can take milliseconds and require hundreds of megabytes of memory. I created the github.com/albertyw/localtimezone project with the goal of providing accurate lat/lng->timezone lookups with microsecond-level response times.

Data Generation

Stage 1: Converting Polygons to H3 Cells

H3 is Uber’s hierarchical hexagonal geospatial indexing system. It divides the world into hexagonal cells at multiple resolutions. At resolution 7, each cell covers about 5.16 square km. Resolution 7 was chosen as a tradeoff: higher resolutions would slow down lookups and inflate the dataset, while lower resolutions would sacrifice accuracy near timezone borders.

With H3, instead of storing polygon geometries and computing point-in-polygon tests, a lat/lng coordinate can be efficiently converted into an H3 cell, and the cell can be mapped directly to a timezone.

GeoJSON Polygons
      │
      ▼
  h3.PolygonToCells(polygon, resolution=7)
      │
      ▼
  Set of H3 cell IDs for each timezone

The conversion of polygons to H3 cells is done in a go generate step, and the resulting H3->timezone mapping is committed into the library. The generator (tzshapefilegen/main.go) downloads the latest timezone GeoJSON release, processes each feature in parallel via goroutines, and collects all the H3 cell IDs for each timezone.

Stage 2: H3 Cell Compaction

A resolution-7 grid of the entire land surface of Earth produces millions of cells. Storing all of them is wasteful, because many neighboring cells belong to the same timezone. H3 provides a compaction operation that exploits its hierarchical structure: when all 7 children of a parent cell map to the same timezone, they can be replaced by their single parent cell at the next coarser resolution.

Before compaction:        After compaction:
  7 children cells          1 parent cell (resolution 6)
  at resolution 7
  ┌───┬───┬───┐             ┌─────────────┐
  │ A │ A │ A │             │      A      │
  ├───┼───┼───┤      →      │   (parent)  │
  │ A │ A │ A │             └─────────────┘
  ├───┼───┤
  │ A │
  └───┘

(Imagine these are hexagons, not rectangles)

This recursively compresses large homogeneous regions like Asia/Shanghai and Europe/Moscow into much smaller representations. The result is a mixed-resolution set of cells: fine-grained cells near timezone borders, coarser cells in wide uniform regions. In localtimezone, H3 compaction reduces the cell count by 97% (32,886K cells to 896K cells).
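As a rough illustration, the compaction rule can be sketched in Go. Real code would use the h3-go library's parent/child functions (e.g. CellToParent); the parent function below is a hypothetical stand-in so the example is self-contained:

```go
package main

import "fmt"

// Hypothetical stand-in for H3's parent relationship: real code would use
// h3.CellToParent. Here a parent simply "owns" cell IDs p*7 .. p*7+6.
func parent(cell int64) int64 { return cell / 7 }

// compact replaces any full group of 7 sibling cells that share one timezone
// with their single parent cell, mirroring one level of H3 compaction.
func compact(cells map[int64]string) map[int64]string {
	type group struct {
		tz      string
		count   int
		uniform bool
	}
	groups := map[int64]*group{}
	for cell, tz := range cells {
		p := parent(cell)
		g, ok := groups[p]
		if !ok {
			groups[p] = &group{tz: tz, count: 1, uniform: true}
			continue
		}
		g.count++
		if g.tz != tz {
			g.uniform = false
		}
	}
	out := map[int64]string{}
	for cell, tz := range cells {
		p := parent(cell)
		if g := groups[p]; g.uniform && g.count == 7 {
			out[p] = g.tz // all 7 siblings agree: keep only the parent
		} else {
			out[cell] = tz
		}
	}
	return out
}

func main() {
	cells := map[int64]string{}
	for i := int64(70); i < 77; i++ { // 7 siblings under one parent
		cells[i] = "Asia/Shanghai"
	}
	cells[77] = "Europe/Moscow" // lone cell under a different parent
	fmt.Println(len(compact(cells))) // 8 cells collapse to 2
}
```

Applying this recursively from resolution 7 up toward resolution 0 yields the mixed-resolution cell set described above.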

Stage 3: The Binary Format

To let the client read the H3 data quickly, the H3->timezone data is serialized into a compact custom binary format called H3TZ:

Offset  Size   Field
──────────────────────────────────────────
0       4      Magic: "H3TZ"
4       1      Version (1)
5       1      H3 resolution used
6       2      Number of timezone names (uint16)
8       ...    String table: [uint16 len][bytes] per timezone name
...     4      Cell count (uint32)
...     N×10   Cell entries: [int64 cell ID][uint16 tz index]

The string table stores timezone names once, and each cell entry references a name by its uint16 index. Each cell entry is a fixed 10 bytes.

The cells array is sorted by cell ID before writing. This is critical for the binary search used at query time.
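A writer for a format like this can be sketched with the standard library's encoding/binary. The field layout follows the table above, but the byte order and helper names here are assumptions, not the library's actual code:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"sort"
)

type entry struct {
	cell int64
	tz   uint16
}

// writeH3TZ serializes the H3TZ layout described above: magic, version,
// resolution, a string table of timezone names, then fixed 10-byte cell
// entries sorted by cell ID. Little-endian byte order is an assumption.
func writeH3TZ(names []string, entries []entry, resolution byte) []byte {
	var buf bytes.Buffer
	buf.WriteString("H3TZ")                                     // magic
	buf.WriteByte(1)                                            // version
	buf.WriteByte(resolution)                                   // H3 resolution
	binary.Write(&buf, binary.LittleEndian, uint16(len(names))) // name count
	for _, n := range names {                                   // string table
		binary.Write(&buf, binary.LittleEndian, uint16(len(n)))
		buf.WriteString(n)
	}
	// Sort by cell ID so the client can binary search at query time.
	sort.Slice(entries, func(i, j int) bool { return entries[i].cell < entries[j].cell })
	binary.Write(&buf, binary.LittleEndian, uint32(len(entries))) // cell count
	for _, e := range entries {                                   // 10 bytes each
		binary.Write(&buf, binary.LittleEndian, e.cell)
		binary.Write(&buf, binary.LittleEndian, e.tz)
	}
	return buf.Bytes()
}

func main() {
	data := writeH3TZ(
		[]string{"Asia/Shanghai", "Europe/Moscow"},
		[]entry{{cell: 42, tz: 1}, {cell: 7, tz: 0}},
		7,
	)
	fmt.Println(string(data[:4]), len(data))
}
```

Because every cell entry is a fixed 10 bytes, the reader can compute offsets directly and bulk-read the whole entry array without per-entry parsing.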

Stage 4: S2 Compression

The raw binary data is then compressed using S2, a high-speed compression algorithm derived from Snappy. S2 is optimized for decompression throughput rather than maximum compression ratio — the entire dataset is decompressed once at client initialization, so decompression speed matters more than final file size. See the previous post about Go Compression Benchmark Results for more information.

Stage 5: Embedding in the Binary

The compressed data file is embedded at compile time:

//go:embed data.h3.s2
var TZData []byte

There is no file I/O at runtime. The timezone data ships inside the Go binary itself. The only cost is binary size and the one-time decompression at startup.

Library Initialization and Lookup

Initialization: Decompression and Indexing

When NewLocalTimeZone() is called, it:

  1. Decompresses the S2-compressed blob
  2. Parses the H3TZ binary header and string table
  3. Bulk-reads all cell entries into two parallel slices: cells []int64 (sorted cell IDs) and tzIdx []uint16 (timezone name indices)
  4. Stores the result in an atomic.Pointer[immutableCache]

S2-compressed bytes  →  decompress  →  parse H3TZ  →  immutableCache
                                                         ├── tzNames []string
                                                         ├── cells   []int64   (sorted)
                                                         └── tzIdx   []uint16

The immutable cache makes the client safe for concurrent access without locks. The atomic.Pointer allows the cache to be replaced atomically if new data is loaded, though in practice the embedded data never changes at runtime.

Client startup takes about 5ms and uses ~17MB of RAM.
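The lock-free cache shape can be sketched as follows, assuming Go 1.19+ for the generic atomic.Pointer. The field names follow the diagram above, but the client type and methods are illustrative, not the library's actual API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// immutableCache mirrors the structure described above: parallel slices of
// sorted cell IDs and timezone-name indices, plus the name table.
type immutableCache struct {
	tzNames []string
	cells   []int64
	tzIdx   []uint16
}

type client struct {
	cache atomic.Pointer[immutableCache] // lock-free reads, atomic swaps
}

// load atomically replaces the whole cache; readers never see a partial state.
func (c *client) load(data *immutableCache) { c.cache.Store(data) }

// lookupName resolves entry i to its timezone name. Safe from any goroutine
// without a mutex, because the cache contents are never mutated in place.
func (c *client) lookupName(i int) string {
	cache := c.cache.Load()
	return cache.tzNames[cache.tzIdx[i]]
}

func main() {
	c := &client{}
	c.load(&immutableCache{
		tzNames: []string{"Asia/Shanghai"},
		cells:   []int64{42},
		tzIdx:   []uint16{0},
	})
	fmt.Println(c.lookupName(0)) // Asia/Shanghai
}
```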

Lookup: Binary Search with Multi-Resolution Fallback

A timezone lookup for a (lat, lon) point works like this:

  1. Convert point to H3 cell at resolution 7
  2. For each resolution from 7 down to 0:
     a. Compute the cell at this resolution (the cell itself or one of its ancestors)
     b. Binary search the sorted cells array
     c. Scan forward to collect all entries with that cell ID (for overlapping zones)
  3. Fallback: Compute timezone from neighboring H3 cells or based on longitude

Step 2 handles the compacted cells. If a point’s exact resolution-7 cell isn’t in the table, its resolution-6 parent might be (because that whole region was compacted). The lookup scans all resolutions, collecting matches at each level to handle overlapping zones.

The binary search is O(log N) over the sorted cells array. With ~900K cells, this is still only about 20 comparisons. The actual measured response time is around 1 microsecond per lookup.
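The binary-search-plus-forward-scan step can be sketched with sort.Search from the standard library. The function name and data here are illustrative, not the library's actual API:

```go
package main

import (
	"fmt"
	"sort"
)

// zonesForCell binary-searches a sorted cell array for the first entry >= target,
// then scans forward to collect every timezone index stored for that cell ID
// (overlapping zones appear as adjacent entries with the same cell ID).
func zonesForCell(cells []int64, tzIdx []uint16, target int64) []uint16 {
	i := sort.Search(len(cells), func(j int) bool { return cells[j] >= target })
	var out []uint16
	for ; i < len(cells) && cells[i] == target; i++ {
		out = append(out, tzIdx[i])
	}
	return out
}

func main() {
	cells := []int64{5, 9, 9, 12} // sorted; cell 9 maps to two zones
	tzIdx := []uint16{0, 1, 2, 0}
	fmt.Println(zonesForCell(cells, tzIdx, 9)) // [1 2]
}
```

At a higher level, the lookup would call this once per resolution, walking from the point's resolution-7 cell up through its ancestors until a match is found.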

Benchmark Numbers

BenchmarkGetZone/GetZone_on_large_cities       989595     1205 ns/op
BenchmarkGetZone/GetOneZone_on_large_cities    1000000    1006 ns/op
BenchmarkClientInit/main_client                247        4772720 ns/op   (~5ms)

About a million lookups per second per core, with a ~5ms cold start.

Architecture Summary

The localtimezone library combines several techniques to speed up initialization and lookups:

  1. Preprocessing polygon GeoJSON into H3 cells for O(log N) cell-based lookups
  2. Compacting H3 cells to minimize memory usage and reduce the binary search space
  3. Serializing the H3 data into a compact binary format for faster loading
  4. Compressing with S2 for fast decompression
  5. Binary searching over a sorted cell array for fast lookups

By combining these techniques, the library maintains acceptable accuracy (down to roughly 5 square kilometers of resolution) while returning timezone values for anywhere in the world in microseconds, using ~17MB of memory.


Go Compression Benchmark Results

March 19, 2026

This is an LLM-generated post that summarizes the benchmarks used to choose a compression library for the localtimezone lat/lng->timezone library.

A benchmark of pure-Go compression libraries against real-world binary data: H3 timezone cells from localtimezone, a library that needed fast and efficient decompression of an ~8.5 MB binary file. All ten libraries tested are pure Go with no CGo dependencies, making the results directly applicable to any Go project that needs cross-platform compression.

The benchmark code is available at github.com/albertyw/go-compression-benchmark. Full per-run result tables: data.h3 results · data_mock.h3 results.


Key Findings

Best compression ratio: XZ/LZMA2 — but at a steep cost

XZ achieves a 10.33x ratio on the 8.5 MB file, shrinking it to 848 KB. The catch: compression takes 450ms and uses 64 MB of memory with nearly a million allocations. Decompression is more reasonable at 97ms. If you compress offline and only pay the decompression cost at runtime, this can work — but the memory spike and allocation count during compression rule it out for any hot path.

Best balance: Zstd

Zstd is the standout. At SpeedFastest it achieves a 5.78x ratio in just 31ms with only 16 MB of memory, and decompression takes just 7 allocations. Stepping up to SpeedBestCompression improves the ratio to 6.67x at the cost of 357ms compression time. Decompression is consistently fast at ~13ms across all levels. For most use cases, Zstd SpeedFastest or SpeedDefault is the right call.

Fastest compression throughput: pgzip and S2

pgzip (parallel gzip) compresses the 8.5 MB file in 5.6ms at BestSpeed by using multiple CPU cores. S2 Better is close at 4.9ms single-threaded. Both pay for this speed in ratio: pgzip gets ~5.16x, S2 Better only 2.40x. These are strong choices for pipelines where compression is on the critical path and throughput matters more than size.

Fastest decompression: Snappy and S2

Snappy decompresses in 4.6ms with just 1 allocation — the lowest overhead of any library. S2 Best is similarly fast at 5ms. These are the right choices if decompression latency is the primary concern and a lower compression ratio is acceptable.

Brotli: excellent ratio, terrible compression speed

Brotli Best (11) achieves 7.79x — the second-best ratio — but takes 10.65 seconds and 802 MB of memory to compress. Even Default (6) takes 302ms. Brotli is designed for serving pre-compressed static assets, not on-the-fly compression. Use it only when you compress once and serve many times.

Gzip stdlib: the safe default

If you want zero dependencies beyond the standard library, compress/gzip at Default gives a solid 5.36x ratio in 197ms. The klauspost drop-in replacement is faster (35ms at Default) with identical output format, making it a worthwhile swap whenever gzip interoperability is required.


Summary Table (8.5 MB file)

Library         Level                 Ratio   Compress  Decompress  Notes
─────────────────────────────────────────────────────────────────────────────
XZ/LZMA2        -                     10.33x  450ms     97ms        Best ratio, slow
Brotli          Best (11)             7.79x   10.65s    36ms        Offline use only
Zstd            SpeedBestCompression  6.67x   357ms     13ms        Best ratio+speed tradeoff
Zstd            SpeedFastest          5.78x   31ms      13ms        Best all-around
Gzip stdlib     Default               5.36x   197ms     24ms        Zero dependencies
Gzip klauspost  Default               5.26x   35ms      15ms        Fast stdlib replacement
pgzip           BestSpeed             5.16x   5.6ms     17ms        Fastest compress (parallel)
S2              Better                2.40x   4.9ms     7ms         Fast, low ratio
Snappy          -                     2.19x   7.6ms     4.6ms       Fastest decompress

Methodology

  • Data: sample_data/data.h3 (~8.5 MB real H3 timezone binary from localtimezone), sample_data/data_mock.h3 (~1.2 KB mock)
  • Iterations: 2 warmup + 10 measured; median values reported
  • Timing and memory measured in separate passes to avoid ReadMemStats stop-the-world pauses distorting timing
  • Environment: AMD Ryzen 9 7900X, amd64, linux, Go 1.26.1
  • All libraries are pure Go — no CGo dependencies

When to use each library

  • Zstd — the default choice for new projects. Excellent ratio across all speed levels, fast decompression, low allocations, and a stable RFC-standardized format.
  • XZ/LZMA2 — when ratio is paramount, compression is offline, and you can tolerate slow compression and high memory usage.
  • Brotli — HTTP serving of static assets (CSS, JS, fonts). Pre-compress and cache; never compress on-the-fly.
  • Gzip stdlib — when you need zero non-stdlib dependencies or interoperability with existing gzip consumers. Swap to klauspost for free speed gains.
  • pgzip — when you need gzip output but compression throughput is a bottleneck; scales with core count.
  • LZ4 — when decompression throughput is the absolute priority and you’re operating at memory-bandwidth speeds. Common in storage systems and databases.
  • Snappy / S2 — lightweight, fast decompression with minimal allocations. S2 is strictly better than Snappy in pure-Go contexts.

Library Backgrounds

gzip / DEFLATE

DEFLATE is a lossless compression algorithm invented by Phil Katz in 1993 and formally specified in RFC 1951 (1996). It combines LZ77 — which replaces repeated byte sequences with back-references using a sliding window — and Huffman coding, which assigns shorter bit strings to more frequent symbols. The gzip file format was created by Jean-Loup Gailly in 1992 as a patent-free replacement for the Unix compress utility, whose LZW algorithm was encumbered by Unisys patents. DEFLATE became the backbone of a generation of internet infrastructure: it is the compression algorithm inside ZIP archives, PNG images, TLS connections, and HTTP content-encoding. The Go standard library provides compress/gzip and compress/flate; the klauspost/compress library offers drop-in replacements with assembly-optimized paths on amd64 that are substantially faster.

Zstandard (Zstd)

Zstandard was developed by Yann Collet at Facebook (now Meta) and open-sourced in August 2016. Its goal was to be a modern replacement for zlib that improves on all metrics simultaneously — compression speed, decompression speed, and ratio. Like DEFLATE it uses LZ77-style dictionary matching, but pairs it with a larger search window and a fast entropy coder based on Finite State Entropy (FSE), a variant of Asymmetric Numeral Systems (ANS). The algorithm was standardized as RFC 8478 in 2018. Adoption has been sweeping: Facebook uses it across its entire data infrastructure, the Linux kernel adopted it for module and filesystem compression, Fedora switched RPM package compression to Zstd in 2019, and Chrome and Firefox both added Content-Encoding: zstd HTTP support in 2024.

Brotli

Brotli was created at Google by Jyrki Alakuijala and Zoltán Szabadka in 2013, originally to reduce the size of WOFF2 web font transfers. Unlike Google’s earlier Zopfli (a superior DEFLATE compressor), Brotli introduced an entirely new format using a modern LZ77 variant, Huffman coding, second-order context modeling, and a large static dictionary of common words drawn from web content. It was generalized for HTTP content-encoding and standardized as RFC 7932 in 2016. Brotli is today the dominant HTTP compression algorithm: all major browsers support it, and Cloudflare, Akamai, AWS CloudFront, Nginx, and Apache all serve it. The WOFF2 font format — which depends on Brotli — received a Technology and Engineering Emmy Award in 2021. Its extreme compression ratios come at a steep compression-time cost, making it best suited for pre-compressing static assets offline rather than on-the-fly.

Snappy

Snappy (originally called “Zippy” internally) was developed at Google by Jeff Dean and Sanjay Ghemawat and open-sourced in March 2011. It was designed not for maximum ratio but for very high throughput, targeting CPU-bound scenarios inside Google’s own infrastructure — MapReduce, Bigtable, and internal RPC systems — where decompression speed is the bottleneck. The algorithm is LZ77-inspired and deliberately avoids entropy coding, accepting a lower ratio in exchange for simplicity and speed. Its wide adoption in open-source infrastructure is notable: Snappy is the default compression algorithm for MongoDB, RocksDB, LevelDB, Apache Cassandra, Hadoop, Apache Parquet, and InfluxDB.

S2

S2 is an extension and improvement of the Snappy format developed by Klaus Post as part of his klauspost/compress Go library, first introduced in August 2019. While Snappy already prioritized speed, S2 redesigns the block format and encoding strategy to simultaneously improve both ratio and throughput. S2 can decompress all valid Snappy data (backward compatible as a reader), but its own output is not readable by the original Snappy library — though it can optionally emit Snappy-compatible output at higher speed than Snappy itself. On typical machine-generated data, S2 in default mode can reduce compressed size by up to 35% compared to Snappy while improving decompression speed. On AMD64 with assembly-optimized paths, S2 stream compression exceeds 10 GB/s.

LZ4

LZ4 was developed by Yann Collet (who later also created Zstandard) and first released in April 2011. Its singular design goal is extreme speed: compression throughput routinely exceeds 500 MB/s per core and decompression can exceed 1 GB/s per core, making it one of the fastest compressors ever published. Like its LZ-family relatives it uses dictionary matching, but with a deliberately simple scheme that minimizes branch mispredictions and memory accesses. LZ4 trades ratio for that speed and is not competitive with gzip or Zstd on ratio. It was integrated into the Linux kernel in version 3.11 for SquashFS, pstore, and crypto layer compression, and ZFS on Linux, FreeBSD, and macOS supports it for transparent filesystem compression.

XZ / LZMA2

LZMA (Lempel–Ziv–Markov chain algorithm) was developed by Igor Pavlov starting in 1998 and became the compression engine powering the 7-Zip archiver’s 7z format. LZMA achieves exceptional ratios by combining a very large dictionary (up to 4 GB), a sophisticated match finder, and range encoding (an arithmetic-coding variant). LZMA2 adds multi-threaded compression by splitting data into independently compressed LZMA streams. The xz file format and XZ Utils were released in 2009 by Lasse Collin as a bzip2 successor, and XZ became the standard for distributing Linux kernel sources and software packages across Fedora, Debian, and Ubuntu — though both have since migrated to Zstandard. XZ gained unwanted notoriety in March 2024 when a supply-chain backdoor was discovered in XZ Utils 5.6.0 and 5.6.1.

pgzip

pgzip is a pure-Go parallel gzip library developed by Klaus Post (github.com/klauspost/pgzip). It is a drop-in replacement for the standard library’s compress/gzip, producing fully standard-compliant output that any gzip reader can decompress — the parallelism is transparent to consumers. Internally, pgzip splits input into independent blocks (defaulting to 1 MB each) and compresses them concurrently across available CPU cores, then stitches the resulting gzip members together. On multi-core hardware, compression throughput scales roughly linearly with core count, and pgzip also offers a Huffman-only mode reaching ~450 MB/s per core when ratio is secondary to predictable speed. It is the natural choice when gzip compatibility is required but single-threaded gzip becomes a bottleneck.


California Oak Tree Identification

February 7, 2026

Location | Leaf | Trunk & bark | Size | Other characteristics | Species
Coastal | 1‑2.5 in long, thick leathery, spiny margins; glossy green above, faint hair line in vein axils below | Trunk grows to 3 ft thick; bark smooth gray‑brown when young, darker gray with broad ridges | Up to 100 ft tall | Evergreen, dense rounded crown | Coast Live Oak (Quercus agrifolia)
Coastal | 2‑6 in long, 3‑7 deep lobes, finely haired underside | Trunk 3 ft thick (up to 5 ft); bark gray and fissured | Up to 100 ft tall | Deciduous; alligator‑like bark; acorns mature in one year | Oregon White Oak (Quercus garryana)
Inland | 5‑10 cm (2‑4 in) long, round, deeply lobed; matte green top, pale green underside, soft fuzz | Bark alligator‑hide ridged; trunk up to 10 ft diameter | Up to 98 ft tall | Deciduous; acorns 2‑3 cm; masting | Valley Oak (Quercus lobata)
Inland | 4‑8 in long, 6 lobes, bristle‑tipped | Bark dark with small plates; grey bark | 30‑80 ft tall, trunk 2 ft diameter | Acorns 2‑3 in, mature second season | Black Oak (Quercus kelloggii)
Inland | Thick leathery, 1‑3.5 in long, margins spinose or entire, fuzzy then smooth | Bark thin ~1 in, smooth, gray‑brown, may develop small tight scales | Up to 80 ft tall, 2 ft diameter | Acorns 1/2‑1.5 in long, two seasons to mature; twig slender; crown may be dense shrub or tree | Canyon Live Oak (Quercus chrysolepis)
Inland | 1.5‑2 in long, entire or sharply pointed teeth; flat shiny green above, yellow‑green below | Young bark smooth gray; older rough, irregularly furrowed with scaly ridges; short trunk, broad crown | 30‑75 ft tall, spread 30‑80 ft | Acorns 1‑1.5 in, mature two seasons; male catkins 2‑3 in | Interior Live Oak (Quercus wislizeni)
Inland | 1‑3 in long, wavy margins, bluish‑green upper, pale lower | Bark light gray, checkered | ≤60 ft tall, 2 ft diameter | Acorns 0.75‑1.5 in, single season | Blue Oak (Quercus douglasii)
Inland | 1.5‑3 in long, leathery, entire or few sharp teeth; dull blue‑gray above, greener below, somewhat fuzzy | Bark gray with narrow scaly ridges, shallow furrows | Up to 50 ft tall; short crooked trunk, large twisted limbs, sparse crown | Acorns 1 in long with thick warty cap, mature one season | Engelmann Oak (Quercus engelmannii)


Claude Code With Ollama Setup

January 31, 2026

I tried claude-code-router with Ollama, but it didn't really work due to mismatched input/output formats. Even Claude Code itself doesn't work well out of the box with local (non-cloud) models. In my experience, you have to do some extra setup to get a useful Claude Code CLI working with Ollama.

The below is a from-scratch setup that makes Claude Code run with a locally running gpt-oss:20b model on Ollama. It’s not as strong as cloud-based SOTA models but at least this runs locally with 32GB of memory and an Nvidia RTX 4090:

  1. Install Ollama and Claude Code

    curl -fsSL https://claude.ai/install.sh | bash
    curl -fsSL https://ollama.com/install.sh | sh
    
  2. Declare a new ollama model with expanded context:

    # create this file with the name "Modelfile"
    FROM gpt-oss:20b
    PARAMETER num_ctx 65536
    
  3. Create model

    ollama create gpt-oss-64k -f Modelfile
    
  4. Set env vars

    # Recommended: add this to ~/.bashrc
    export ANTHROPIC_AUTH_TOKEN=ollama
    export ANTHROPIC_BASE_URL=http://localhost:11434
    
  5. Run claude code

    claude --model gpt-oss-64k
    
  6. Test prompts:

    > list files in current directory
    > create a python script that calculates the first 10 fibonacci numbers called fib.py
    > Run fib.py and show its output
    


Switching From Python requirements.txt to pyproject.toml

December 6, 2025

In Python, many projects are switching from the old requirements.txt format for declaring dependencies to pyproject.toml. A pyproject.toml can condense multiple requirements-*.txt files, as well as other package metadata, into a single file, simplifying several package maintenance processes.

The steps to switch are:

  1. Create a bare-bones pyproject.toml file in the root of your package:

    [project]
    name = "<NAME>"
    version = "<VERSION>"
    dependencies = [
        "dependency1",
        "dependency2",
    ]
    

    Copy a name and version from your setup.py if you’re still using that. Copy dependencies from your requirements.txt. Dependencies in pyproject.toml follow the same format as requirements.txt and support dependency versioning with ==, >=, and < limits.

  2. If you’ve split your requirements-test.txt or other types of dependencies, you can include them in your pyproject.toml as optional dependencies:

    [project]
    ...
    
    [project.optional-dependencies]
    test = [
        "dependency3",
        "dependency4",
    ]
    
  3. Delete your requirements.txt. Hopefully you’re using version control.

  4. Switch your build commands from pip install -r requirements.txt to pip install -e . (note the trailing dot). To also install the optional test dependencies, use pip install -e .[test].


Generating Cloudflare Origin Certificate for Multiple Domains

November 8, 2025

Introductory Computer Science and Software Engineering Topics

August 3, 2025

PID Controller

June 29, 2025

Availability Percentages

February 8, 2025

Javascript/Typescript Decorators Suck

January 3, 2025

Upgrading MariaDB Database Versions

June 2, 2024

Concurrent Python Example

January 1, 2024

Updating UUIDField on MariaDB to Django 5

December 27, 2023

Replacing Setup.py

December 7, 2023

Fixing Mariadb --Column-Statistics Errors

June 5, 2023

Geographic Geometry Simplification

February 2, 2023

Linters

January 2, 2023

Installing Mysqlclient in Python Slim Docker Image

December 29, 2022

Processor Trends

December 19, 2022

Resizing a Ubuntu Disk in a UTM VM

October 19, 2022

Bash File Test Operators

October 5, 2022

Python Generic Type Annotations

May 28, 2022

Mac Menubar Applications

February 17, 2022

Logodust

February 13, 2022

Python Releases

January 8, 2022

Debian Releases

January 7, 2022

Ubuntu Releases and Support Periods

December 18, 2021

Monitoring System CLIs (Top for X)

December 18, 2021

Fixing "EFI stub: Exiting boot services and installing virtual address map..."

December 11, 2021

ARM Support

November 29, 2021

Map Caps Lock to Escape for Vim

November 24, 2021

Download and Convert Youtube Playlists to MP3 Files

July 15, 2021

Nobody Ever Got Fired for Copying FAANG

June 27, 2021

Removing Token Authentication From Jupyter/iPython Notebooks

May 31, 2021

Debian and Ubuntu Releases

February 13, 2021

Setting Up FastAI Fastbook on a Fresh Ubuntu Instance

January 31, 2021

Tip for Developer Tools Startups

January 30, 2021

A Better Go Defer

October 20, 2020

Covid-19 Economy Predictions

October 13, 2020

Basic Docker Monitoring

July 4, 2020

Switching From Go Dep to Go Mod

May 30, 2020

Upgrading LibMySQLClient in Python MySQLDB/MySQLClient

May 25, 2020

Developing Django in Production

May 15, 2020

Quote

March 5, 2020

Sendmail Wrapper for Mailgun

March 1, 2020

Python Release Support Timeline

December 26, 2019

Use the Default Flake8 Ignores

December 14, 2019

Making Pip Require a Virtualenv

December 5, 2019

Engineering Toolbox

November 30, 2019

Node Timezones

November 1, 2019

Sampling Samples

August 21, 2019

Rotating a NxN Matrix in One Line of Python

July 27, 2019

iTerm2 Search History

July 19, 2019

Nginx Auth With IP Whitelists

June 29, 2019

Bash Strict Mode

May 11, 2019

Optimizing Asus Routers for Serving Websites With Cloudflare

May 5, 2019

Browserify, Mochify, Nyc, Envify, and Dotenv

April 1, 2019

Scraping Images From Tumblr

February 24, 2019

There Are Too Many NPM Packages

February 10, 2019

Programmers Writing Legal Documents

January 31, 2019

Solidity Review

November 17, 2018

Likwid

November 9, 2018

My First Server's IP

November 9, 2018

Installing Netdata

September 23, 2018

Interrobang Versus Shebang

July 10, 2018

Bad Interview Questions

July 8, 2018

Showing Users in Different Databases

July 7, 2018

Some MIT (Undergraduate) Admissions Interview Advice

July 4, 2018

Optimize the Develop-Test-Debug Cycle

April 22, 2018

Example of Python Subprocess

March 23, 2018

Spotted in Taiwan

January 20, 2018

Fixing "Fatal Error: Python.h: No Such File or Directory"

December 16, 2017

Cassandra Primary Keys

December 11, 2017

MyPy Review

November 2, 2017

Griping About Time Zones

October 26, 2017

Bundling Python Packages With PyInstaller and Requests

September 23, 2017

Go Receiver Pointers vs. Values

September 4, 2017

Fixing statsonice.com Latency

September 1, 2017

Showing Schemas in Different Databases

August 26, 2017

Straight Lines

June 2, 2017

Emerson on Intellect

May 29, 2017

Core Metric for Developer Productivity

May 21, 2017

How to Capture a Camera Image With Python

May 7, 2017

Python Has a Ridiculous Number of Inotify Implementations

May 2, 2017

Projects: Gentle-Alerts

April 27, 2017

Creating a New PyPI Release

April 24, 2017

Eva Air USB Ports

April 24, 2017

Projects: Git-Browse

March 18, 2017

Cassandra Compaction Strategies

March 5, 2017

Code Is Like Tissue Paper

January 25, 2017

Seen in a Bathroom Stall at MIT

January 24, 2017

Underused Python Package: Webbrowser

January 21, 2017

Pax ?

January 5, 2017

Golang Review

January 2, 2017

Wadler's Law

December 15, 2016

Tunnel V2

December 8, 2016

MultiPens

December 5, 2016

SSH Tunnel

September 18, 2016

That Time I Was a Whitehat Hacker

September 18, 2016

Comparison of Country and Company GDPs

September 8, 2016

Sketching Science

September 8, 2016

Tech Hiring Misperceptions at Different Companies

July 22, 2016

Calculating Rails Database Connections

June 26, 2016

DevOps Reactions

June 12, 2016

Tuning Postgres

June 9, 2016

Fibonaccoli

June 4, 2016