Weekend Reading — 🎩 Forms and background jobs
This week we attempt a real-life coding exercise, draw our own animations, contrast AI agents with the Semantic Web, fold like a pro, and learn an ancient form of writing.
Andy Baio “birds aren’t real”
Tech Stuff
matt swanson Facts:
“Building a b2b SaaS is basically 95% forms and background jobs so get really fucking good at those”
Making Tanstack Table 1000x faster with a 1 line change Peering behind the abstraction:
A few months back I was working on a Javascript frontend for a large dataset using Tanstack Table. The relevant constraints were:
- Up to 50k rows of content
- Grouped by up to 3 columns

Using react and virtualized rendering, showing 50k rows was performing well. But when the Tanstack Table grouping feature was enabled, I was seeing slowdowns on a few thousand rows, and huge slowdowns on 50k rows.
Amazon CodeWhisperer Amazon’s alternative to Copilot
- Free for individual use
- “flag or filter code suggestions that resemble open-source training data”
- “scan your code to detect hard-to-find vulnerabilities, and get code suggestions to remediate them immediately”
- Fine-tuned to use AWS APIs
(I've not used it myself, might be interesting if you can't use Copilot)
Modern HTML email (tables no longer required) I wish, but the holdout is the 20% that use Outlook for Windows. Unfortunately, while O4W is getting an updated modern rendering engine that doesn’t require tables, it’s also used by a lot of enterprise customers that are in no rush to upgrade. Just a reminder: Windows 8 was supported all the way to January of this year.
Issue 347016: Support user stylesheets As good a reason as any:
This design choice was made by accounting and legal, not by development. Cannot simultaneously fix and keep job. Therefore, won't fix.
Who invented vector clocks? Often, “invented by” really means re-discovered and popularized by the person who got the most citations:
And this is unsurprising – in computer science there are many cases of things being used for years (because they work!) before we develop the theory that explains why they work.
"technical debt" implies the existence of predatory technical lenders
"hey kid, you wanna write that code using our framework? it'll be easy at first“
everythingishacked/Semaphore "A full-body keyboard using gestures to type through computer vision."
Real-life coding exercise!
Business Side
Retention Benchmarks and Insights From Studying Over 2,100 SaaS Businesses It seems that customer retention is inversely correlated with interest rates:
More than half of SaaS businesses had lower retention in 2022 compared to 2021. A challenging macroeconomic environment meant that subscribers re-assessed and cut their SaaS spend. This is in sharp contrast to 2021 which saw almost 70% of businesses having a higher retention rate in 2021 when compared to 2020.
mhoye 😲
No conversation about risk management is complete without a reminder that the Hindenburg blimp had a smoking lounge.
Machine Thinking
FAIR Animated Drawings This has been blowing up on my Mastodon timeline. Create animations starring your own drawn characters.
"When I tell it it's wrong, ChatGPT tries again and gets more correct!"
No. It learns which answer you stopped asking questions at. It has no idea of correct factually!
Correct for ChatGPT means you are satisfied with the combo of words it put together. That you liked its information-shaped sentence.
That's all. If you tell it it's still wrong, it will try again with different words instead.
You just think it's getting more right because you let it stop when it accidentally is.
AgentGPT ChatGPT is old news, the future is autonomous GPT agents! (See also AutoGPT, Cognosys, and by the time you read this maybe 10 other interesting experiments)
I’m keeping my eye on this, but I’m also keeping my expectations low for now. The semantic web is old enough to drink, and hasn’t delivered the autonomous agents we were all promised as far back as 2001:
This Web of structured data would enable automated assistants (called software agents) to operate on our behalf, autonomously completing tasks, and in the process, greatly simplifying and enriching our online experience
This time is different. But is it, and how?
Since LLMs have a human-like ability to read text, click buttons, and so on, you don’t need to define all the semantics upfront. That avoids the Semantic Web chicken-and-egg problem — AI agents can use any website today to perform any number of tasks without prior programming.
That agility has a downside — AI agents occasionally make mistakes, and when agents are designed to spawn and recurse, small mistakes quickly build into avalanches.
For my little experiment, I asked the AI agent “get me 3 bars of dark chocolate”. It found 3 different stores where it could buy dark chocolate. And then proceeded to buy 3 bars from each store. That’s 9 total if you’re keeping count (plus 3x delivery fees).
And then it got stuck in an infinite loop …
There are ways we can control this, but adding more specificity for anticipated use cases is just programming by another name, and we’re back to the chicken-and-egg problem that doomed the Semantic Web.
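To make that concrete, here’s a toy sketch of the kind of guardrails I mean. This is not AgentGPT’s actual code; the `llm_propose_subtasks` stand-in and the depth/budget numbers are made up for illustration.

```python
def llm_propose_subtasks(task: str) -> list[str]:
    # Stand-in for the LLM call. It always proposes two follow-up tasks,
    # which is exactly how small mistakes snowball into avalanches.
    return [f"{task} / step A", f"{task} / step B"]


def run_agent(task: str, depth: int = 0, budget: dict | None = None) -> None:
    budget = budget if budget is not None else {"spend": 0}

    # Guardrail 1: don't recurse forever.
    if depth >= 3:
        print(f"skipping (too deep): {task}")
        return

    # Guardrail 2: stop once we've blown the allowance (think 9 chocolate bars).
    if budget["spend"] >= 10:
        print("budget exhausted, stopping")
        return

    print(f"working on: {task}")
    for subtask in llm_propose_subtasks(task):
        budget["spend"] += 1
        run_agent(subtask, depth + 1, budget)


run_agent("get me 3 bars of dark chocolate")
```

Of course, caps like these are just anticipating use cases again, which is the point.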
One difference though. There are many use cases where the only cost of failed AI agents is that you discard some outputs — research, writing, editing, travel planning, code suggestions, etc. fit that category.
When you can easily discard work, then you can start by accepting low quality and continuously improving the tools. One thing today’s imperfect AI agents can rely on — worse is better.
Generative Agents: Interactive Simulacra of Human Behavior An interesting experiment layering LLMs to build higher cognitive abilities.
It’s a simulated environment where AI agents get to socialize with each other. The emergent behavior includes planning a Valentine’s Day party and getting other agents to show up.
I wrote a thread that summarizes this, but basically, the AI here consists of a set of processes all using GPT-3.5:
- Layer that creates a memory stream of the agent’s existence in the simulation
- Layer that ranks these memories for importance, to help streamline retrieval
- Layer that combines memories — recursively, 2~3 times a day — to form reflections
- Layer that uses memories + reflections to plan the agent’s day, and recursively down to 5~15 minute intervals
- Layer for spontaneity: “we prompt the language model with these observations to decide whether the agent should continue with their existing plan, or react.”
The simulation also uses GPT-3 to update objects in the environment in response to agent actions.
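Here’s a compressed sketch of how I read that layering. The class and method names are mine, and the dummy importance score and reflections stand in for what are really GPT-3.5 prompts in the paper.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    importance: float  # in the paper, an LLM rates each memory's importance


@dataclass
class Agent:
    name: str
    memories: list[Memory] = field(default_factory=list)

    def observe(self, text: str) -> None:
        # Memory stream layer: record everything the agent experiences.
        # Importance layer: rank it (here a dummy heuristic, not an LLM call).
        importance = 8.0 if "party" in text else 2.0
        self.memories.append(Memory(text, importance))

    def reflect(self) -> None:
        # Reflection layer: periodically summarize the most important memories
        # into a higher-level observation that goes back into the stream.
        top = sorted(self.memories, key=lambda m: m.importance, reverse=True)[:5]
        summary = "; ".join(m.text for m in top)
        self.memories.append(Memory(f"reflection: {summary}", 9.0))

    def plan_day(self) -> list[str]:
        # Planning layer: turn memories and reflections into a plan, which the
        # paper then recursively refines down to 5~15 minute actions.
        return [f"{self.name} acts on: {m.text}" for m in self.memories[-3:]]


isabella = Agent("Isabella")
isabella.observe("Isabella wants to throw a Valentine's Day party")
isabella.observe("Maria offered to help decorate the cafe")
isabella.reflect()
print(isabella.plan_day())
```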
All that layering is not cheap: “substantial time and resources to simulate 25 agents for two days, costing thousands of dollars in token credit and taking multiple days to complete.” So don’t expect to see these NPCs in your favorite game quite yet, but what an interesting proof of concept.
"prompt engineering" - what a sad job. So you spend all day trying to figure out the exact sequence of words that will get the computer to do the thing you want lol
Wait
Oh noe
Large Language Models are Human-Level Prompt Engineers We’re entering the LLM phase where prompt engineers are replaced by GPTs:
We propose an algorithm for automatic instruction generation and selection for large language models with human level performance.
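The loop, roughly as I understand it: have one model propose candidate instructions from a handful of input/output examples, score each candidate on those examples, and keep the winner. Both LLM calls are faked in this sketch.

```python
def propose_instructions(examples: list[tuple[str, str]], n: int = 3) -> list[str]:
    # In the paper, an LLM infers candidate instructions from the examples.
    return [
        "Translate the word to French.",
        "Give the French word for the input.",
        "Repeat the input word.",
    ][:n]


def run_instruction(instruction: str, x: str) -> str:
    # Stand-in for running the instruction through the target LLM.
    fake_french = {"cat": "chat", "dog": "chien"}
    return fake_french[x] if "French" in instruction else x


def score(instruction: str, examples: list[tuple[str, str]]) -> float:
    return sum(run_instruction(instruction, x) == y for x, y in examples) / len(examples)


examples = [("cat", "chat"), ("dog", "chien")]
best = max(propose_instructions(examples), key=lambda ins: score(ins, examples))
print(best)  # the "Repeat the input word" candidate loses on accuracy
```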
Insecurity
Firefox Rolls Out Total Cookie Protection By Default
Total Cookie Protection works by creating a separate “cookie jar” for each website you visit. Instead of allowing trackers to link up your behavior on multiple sites, they just get to see behavior on individual sites. Any time a website, or third-party content embedded in a website, deposits a cookie in your browser, that cookie is confined to the cookie jar assigned to only that website. No other websites can reach into the cookie jars that don’t belong to them and find out what the other websites’ cookies know about you — giving you freedom from invasive ads and reducing the amount of information companies gather about you.
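A toy model of the idea, nothing like Firefox’s actual implementation: key every cookie jar by the top-level site you’re visiting, so the same tracker embedded on two different sites ends up with two unrelated jars.

```python
from collections import defaultdict

# One cookie jar per top-level site; within each jar, cookies are keyed by
# whoever set them (first-party or embedded third-party).
jars: dict[str, dict[str, str]] = defaultdict(dict)


def set_cookie(top_level_site: str, setter: str, value: str) -> None:
    jars[top_level_site][setter] = value


# The same tracker embedded on two sites writes into two separate jars...
set_cookie("news.example", "tracker.example", "id=123")
set_cookie("shop.example", "tracker.example", "id=456")

# ...so it can no longer link your visits across sites.
print(jars["news.example"]["tracker.example"])  # id=123
print(jars["shop.example"]["tracker.example"])  # id=456
```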
⭐ None of the Above
Life with a dog is about 90% following each other around and wondering what the other is eating.
Taggart “We had a choice lo those many years ago. If only…”
Lasers were a huge scientific breakthrough and now we use them to play with cats. Computers were also a huge scientific breakthrough and now we use them to look at pictures of cats. In other words, science was made for cats.
Musab KAYA on TikTok New life goal
This USPS facility in Utah does nothing but decipher your bad handwriting I'm old enough to be offended:
Newly-hired keyers go through 55 hours of training before they start the job, and it comes with a crash course on a form of writing that hasn't been taught in schools for more than a decade.
Could Ice Cream Possibly Be Good for You? How is that even a question?
Speaking of, our latest fav is to sandwich plain ice cream between two Korean puffed cereal cookies. The crisp/cream texture is better than any store-bought pre-assembled ice cream sandwich.
Buitengebieden “Meet Bayley, the real life version of Snoopy.. 😊”