Weekend Reading — 👌 Ctrl-Salt-Delete
This week we load up on SVG, pay back tech debt, fight name complexity, learn how to build ChatGPT, and go in search of the SF/Mission super burrito.
Tech Stuff
SVG Repo A catalog of 6000+ SVG collections, with over 460K free SVG vectors and icons.
What if writing tests was a joyful experience? 🔥 Love this developer experience:
It’s hard to overstate how powerful this workflow is. To “write a test” you just drop an [%expect] block below some code and it will get filled in with whatever that code prints.
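The article is about OCaml's ppx_expect, but the workflow translates to any language. Here is a minimal toy sketch of the idea in Python (names like `expect_test` are my own, not from the article): run some code, capture what it prints, and on the first run record the output as the expectation, the way ppx_expect fills in an empty `[%expect]` block.

```python
# Toy sketch of expect-style testing: capture a function's printed
# output, record it the first time, and compare on later runs.
import io
import contextlib

# Stands in for the [%expect] blocks that live in source files.
_expectations: dict[str, str] = {}

def expect_test(name: str, fn) -> str:
    """Capture fn()'s stdout; record it if unseen, otherwise compare."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        fn()
    actual = buf.getvalue()
    if name not in _expectations:
        _expectations[name] = actual  # "fill in the expect block"
        return "recorded"
    return "pass" if _expectations[name] == actual else "FAIL"
```

In a real expect-test framework the recorded output is written back into the source file, so "writing a test" is just running the suite and reviewing the diff.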
20 Things I've Learned in my 20 Years as a Software Engineer
Learning from those who came before us is instrumental to success, but we often forget an important caveat. Almost all advice is contextual, yet it is rarely delivered with any context. “You just need to charge more!” says the company […]
We invested 10% to pay back tech debt; Here’s what happened
The management gradually started to like this practice too because tech debt would not steal from regular work or cause embarrassingly unnecessary incidents. Plus, this freedom and trust boosted the team spirit. The team was treated like adults, so it behaved accordingly.
Intuitive List Item Transitions with the View Transitions API “I’m using zero libraries here. None are required. That’s pretty cool.”
Taming Names in Software Development Dealing with naming complexity, using naming conventions and molds, and running a name audit:
Good names What is a name? A name is a label, a handle, a pointer in your brain’s memory. A complex idea neatly encapsulated. A name lets you refer to “the economy”, or “dogfooding” mid-sentence without needing a three-paragraph essay to explain the term. If you think of software development as just carving up […]
Never write a commit message again (with the help of GPT-3)
First, each file is summarized independently and in parallel to produce a list of bullet points from OpenAI’s models.
Then, the summarized file changes are summarized in two ways. First, the model is instructed to generate a short, one-line title for the commit. In addition, the model summarizes higher-level bullet points to be included in the body of the commit message.
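The two-stage pipeline described above can be sketched in a few lines of Python. This is not the tool's actual code: `summarize` is a stub standing in for a call to an OpenAI model, and the per-file stage runs in a thread pool to mirror the "independently and in parallel" step.

```python
# Sketch: summarize each file's diff independently (in parallel), then
# condense those summaries into a one-line title plus body bullets.
from concurrent.futures import ThreadPoolExecutor

def summarize(text: str) -> str:
    # Placeholder for a model call; here we just take the first line.
    return text.splitlines()[0]

def commit_message(file_diffs: dict[str, str]) -> str:
    # Stage 1: one summary per file, in parallel.
    with ThreadPoolExecutor() as pool:
        summaries = dict(zip(file_diffs, pool.map(summarize, file_diffs.values())))
    bullets = [f"- {path}: {s}" for path, s in summaries.items()]
    # Stage 2: condense the per-file summaries into a title + body.
    title = summarize("\n".join(summaries.values()))
    return "\n".join([title, ""] + bullets)
```

Swapping the stub for a real API call (with a prompt like "summarize this diff as bullet points") is the whole trick.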
LangChain A toolkit for working with LLMs: prompt templates, abstract API for multiple services, prompt chaining, conversational memory (useful for chat bots), utilities for text splitting and token counting, and more.
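To make one of those utilities concrete, here is a plain-Python sketch of overlapping text splitting with a naive word-based token count. This is not LangChain's API, just the underlying idea: break a long document into chunks that fit a model's context window, overlapping them so no sentence is cut off without context.

```python
# Naive "token" counting and overlapping chunking, in plain Python.
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: whitespace-separated words.
    return len(text.split())

def split_text(text: str, chunk_size: int = 100, overlap: int = 20) -> list[str]:
    """Split text into word chunks of chunk_size, overlapping by overlap."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

Real implementations count tokens with the model's own tokenizer and prefer splitting on paragraph and sentence boundaries, but the shape is the same.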
Codeium A Copilot alternative, free to use, and yes there's a plugin for Vim.
remix-run/remix Remix 1.11 added defer, which allows the page to render while still loading data from the server.
The release notes say not to use this feature unless you really need to, so of course I used it on a UI that's not performance critical -- I still have to wait a second or two for the content to load, but the page layout renders quickly, so it feels like less waiting.
Caveat: this works when using Cloudflare or Express (e.g. fly.io), but makes no difference when using Lambda (e.g. Vercel, Netlify).
SwiftOnSecurity “History in pics: Testing prototype Roomba's in 1982. It would take two decades until they could be made small enough to clean under a couch.”
Peoples
The Oatmeal This is from a larger comic about creativity: https://theoatmeal.com/comics/creativity_things
Business Side
Netflix Has Created A Self-Fulfilling Cancelation Loop With Its New Shows Be careful what you measure; you could end up optimizing the wrong thing:
But what’s happened now is that this has happened so often with so many shows, that Netflix has created a self-fulfilling loop with many series that probably could have gone on to become valuable catalogue additions otherwise.
CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections This is here and not in the AI section, because really it's a problem of how search engine gamification ruins the internet:
All of the articles published under the “CNET Money” byline are very general explainers with plain language questions as headlines. They are clearly optimized to take advantage of Google’s search algorithms, and to end up at the top of peoples’ results pages—drowning out existing content and capturing clicks.
Content farms divert traffic away from high-quality sources. They do that by churning out a lot of content, written not by experts but by content writers on a tight schedule. The writers get their information wherever it's available, often from search results (i.e. other content farms).
Essentially, the content gets recycled from one content farm to another, until it's mediocre at best, and sometimes outdated, misleading, or wrong.
Maybe what we need is for AI to accelerate the SEO death spiral, for search to get even worse, so it could get better.
Google vs. ChatGPT told by aglio e olio makes the same point:
I don’t want to sound all doom and gloom. If generative AI tools make the internet a worse place for a bit, that will only be because we are used to a certain type of internet that is taxed by big tech and subsidized by ads. If we are inundated with even more spam, and our search results become even more useless, it’ll only mean that we should move past those business models. I am glad OpenAI is held up as a real challenger to Google. The king is dead. I wholeheartedly welcome our new AI overlords.
Machine Thinking
Let's build GPT: from scratch, in code, spelled out. If you want to understand how GPT works. Source code available here: https://github.com/karpathy/nanoGPT.
If you're curious about GPT, I recommend taking some time to watch this video and look through the code. The math and concepts are not trivial, but the code is only about 300 lines, so it's not too hard to get a sense of how it works.
Also, understand the difference between prediction, which is quick enough for an autocomplete UI, and training, which at this point is beyond the reach of a hobbyist rig.
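To make "next-token prediction" concrete, here is a toy character-level bigram model in plain Python, roughly the jumping-off point of the video minus the neural network: it counts which character follows which, instead of learning those statistics with gradient descent.

```python
# Toy character-level bigram model: count successors, predict greedily.
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each character, which characters follow it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch: str) -> str:
    # Greedy prediction: the most common character seen after `ch`.
    return counts[ch].most_common(1)[0][0]

def generate(counts, start: str, length: int) -> str:
    """Generate text one character at a time, each conditioned on the last."""
    out = start
    for _ in range(length):
        out += predict_next(counts, out[-1])
    return out
```

GPT does the same generate-one-token-at-a-time loop; the difference is that a transformer conditions on a long window of prior tokens rather than just the last character.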
Talking About Large Language Models Just a friendly reminder that LLMs (like ChatGPT) don't know anything or do any thinking, and if they appear so, it's only in the eye of the beholder:
A bare-bones LLM doesn’t “really” know anything because all it does, at a fundamental level, is sequence prediction. Sometimes a predicted sequence takes the form of a proposition. But the special relationship propositional sequences have to truth is apparent only to the humans who are asking questions, or to those who provided the data the model was trained on.
God grant me the agility to adopt the AI tools that will still be relevant in 6 months, the patience to pass on those that won't, and the wisdom to know the difference.
Insecurity
"In retrospect, crowdsourcing the name of the newest plane in their fleet ultimately proved unwise. Planey McPlaneface'; DROP TABLE flights;-- took to the air early Wednesday morning on its maiden flight, shortly before the outages began."
https://www.theverge.com/2023/1/11/23549834/usa-flights-grounded-faa-computer-glitch
"With AI, now any idiot can write malware!"
As a security researcher, I can assure you that idiots have been writing malware for quite some time.
Everything Else
Uncle Duke “one can eventually tire of simply crossing the road”
I'm sorry, but you can't always be experiencing a higher volume of calls than average.
That's not how averages work.
Jason Thorne “CTRL - SALT - DELETE. One of the new names for one of our city snow plows”
Spanish Highs, Sierra Nevada “The aurora borealis seen from near Reykjavik last night. Taken by our team out there at the moment. Absolutely stunning!”
“Tightly rolled burrito in Seattle proper?” Why is it so hard to find SF/Mission-style super burritos outside the Bay Area:
A long time ago I used to know a taco truck owned by some guys from Oakland operating in Belltown, and when I heard they were from Oakland I asked if they could do a legit super burrito.
They said they wished they could, but sadly could not, and explained why. They otherwise made a really nice burrito, but it still wasn't the same and we both knew it.
ancient catbus “the perfect escalator doesn't exi…”