Using AI to aid color scheme migrations
Recently I found a good use case for AI while migrating my dotfiles to another color scheme. This is a …
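The mechanical core of a migration like this is a bulk find-and-replace across config files. A minimal sketch of what the hand-written version might look like, with a hypothetical (and much too small) mapping from old hex codes to new ones:

```python
import re
from pathlib import Path

# Hypothetical mapping from the old palette's hex codes to the new one's.
PALETTE = {
    "#282828": "#1e1e2e",  # background
    "#ebdbb2": "#cdd6f4",  # foreground
    "#fb4934": "#f38ba8",  # red accent
}

# Match any old color case-insensitively, so "#EBDBB2" is caught too.
HEX_RE = re.compile("|".join(re.escape(k) for k in PALETTE), re.IGNORECASE)

def migrate_text(text: str) -> str:
    """Replace every old palette color with its new counterpart."""
    return HEX_RE.sub(lambda m: PALETTE[m.group(0).lower()], text)

def migrate_file(path: Path) -> None:
    """Rewrite a single config file in place."""
    path.write_text(migrate_text(path.read_text()))
```

The tedious part isn't this script; it's building the mapping, i.e., deciding which color in the new scheme plays each role from the old one - which is exactly the kind of judgment call this post is about.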
Code is cheap, show me the… what exactly?
There’s this famous Linus quote:
“Talk is cheap, show me the code.”
— Linus Torvalds
But now, you can quite literally talk into your computer, it generates code, and it’s relatively cheap.
We can, of course, extrapolate what Linus said, and talk about whether that code is good or not, or whether it solves the right problems. And that’s what we’re going to discuss here.
I am definitely more productive with AI: whether or not I enjoy writing a given piece of code myself, the model will be faster than me in most cases.
It is here to stay, no point in crying about it.
The problem it introduces is that, while it outputs software at unprecedented speeds, its output is also soulless.
Everything looks similar, and feels neither extremely wrong nor just right. It feels like a copy of a copy of a copy of something else - both the code and aesthetics.
It’s almost like mindless robots made it - mainly because they did.
A couple of months ago, the book The Art of Doing Science and Engineering: Learning to Learn by Richard W. Hamming got “famous” on Twitter.
I’m a sucker for good books, and the title (and cover) seemed intriguing, so I picked it up. I confess that I haven’t finished reading it yet, but he talks extensively about “style”: how you should develop your own, how to learn it, and how having your own style makes you different from everyone else. At some point the author draws a parallel with painting, which I found quite interesting.
There’s no single quote from that book that would make sense alone, but I found another one, from another book, which might:
“The taste to work on the right problem at the right time and in the right way is the secret of doing significant things.”
— Richard W. Hamming, Methods of Mathematics Applied to Calculus, Probability, and Statistics
My understanding is that taste (or style), in this context, means a kind of refined judgment or intuition about what’s important and how the important things should be done, and that this taste is something you develop gradually.
And I think that when people read your code, or use your software, they can feel it.
I also don’t think that AI-made software has taste, hence why it feels so soulless most of the time.
Taste, as Hamming says, includes working on the right things at the right time. But what if time wasn’t a constraint? What if you could write all the possible solutions for all the possible problems at the same time?
Models can push code way faster than humans ever could, and I think this makes the time constraint fade, maybe even disappear: a model could eventually solve all the right problems by sheer brute force.
For now, the remaining constraint is money: running and using LLMs is expensive, as anyone who has used AI coding tools knows.
So, at least for now, we can still apply our intuition in choosing the right problems to solve, and in steering the model to solve them much as we would.
If you can do that, you’ll get the productivity benefits of LLMs, while still applying some of your taste to its output.
You can, of course, also make manual edits to the generated code so it better fits what you think is right - I do this all the time, and it works quite well.
I’m not sure how long money will be a constraint - maybe at some point we’ll make a breakthrough discovery that makes running and training models so cheap that nothing really matters anymore.
One could argue that taste is also knowing what shouldn’t be built, e.g., things that are not actually problems, or completely wrong solutions for a valid problem.
In that vein, I think that, maybe, we could still use a human to evaluate what is good and what is garbage, and throw the garbage away.
But then, again: would it even matter if time and money are not constraints anymore?
I think the only thing that brute forcing might not solve is the aesthetics of software, i.e., software that doesn’t feel soulless, and maybe that’s the thing we’re going to be left with: making AI-generated software feel like “regular” software.
I wonder how much people will even care about that in the end. Probably not that much: just look at how every website has looked like every other website for years now, mainly because everyone uses the same component libraries.
If it’s all about tradeoffs, will people prefer cheaper, soulless software now, or expensive, soulful software later?
I don’t know.
I’m guessing a mix of both.
And if the ratio tips enough towards the soulless side, I think we might, then, finally, be cooked.
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
— Roy Amara
Oh well, there’s always something else to do.