- Lean into the LLM (permalink)
One thing I have to "retrain" myself on is leaning into LLMs more. I expect most old-school programmers will have the same struggle with their "muscle memory" when adopting these new tools.
I already said it (previous journal entry) - if you are not using something like Cursor or Copilot in Visual Studio (regular or Code) to augment Intellisense, you're missing out. That part is easy and magical: you get much better autocomplete. And it's not just stuff like common STL patterns, the kind of stuff you'd expect to find on Google repeated by a million "SEO-optimized" websites. It will surprise you with a deep understanding of what you're trying to do, in context. At least once a day I get autocomplete so exact, from typing so few characters, while implementing complex, novel algorithmic stuff, that I question how TF the LLM is reading my mind.
But I still have to retrain myself on when and how to "reach out" to LLMs.
A few things I noticed:
1) LLMs are not better than me at creativity. This is not surprising! I am an expert in my fields, advancing the state of the art; LLMs, by construction, can at best be expected to be mediocre. Don't use LLMs for creativity - especially if you are an expert (very different if you're doing stuff where mediocrity is already better than your current skills, or where mediocrity is all that's needed).
2) You don't need to ask LLMs how to do something - i.e. for documentation - if you are immediately planning to use that knowledge to write code. It's much more efficient in these situations to use an LLM-enabled editor and ask the LLM directly to do a given change or implement a given thing.
It will most likely get it right - working directly in code makes LLMs better, it's faster, etc. This is the first thing I still do wrong too often.
3) LLMs are better than you think at mechanical transformations and rote stuff. I still had a lot of skepticism because of the risk of hallucinations, but that's not how LLMs work! If you ask them to do some rote transformation, they have the full context of what you're asking, and it's unlikely they will hallucinate. LLMs hallucinate when pushed towards creativity! For example... take a table of numbers and format it for a given language using a given data structure? Will be done perfectly! Translate between languages? Will work much better than you expect. I translated long, complex GLSL shaders (stuff from IQ...) to python/jax/numpy by just asking an LLM to do so, and it worked perfectly.
Yesterday I was "translating" some C++ code to C (think: adding typedef to all structs, removing member functions etc). After going at it a bit by hand, I reminded myself that I should have been using an LLM instead... and indeed, the machine did the translation PERFECTLY. In fact, even better than my hand-written version, because I'm lazy, I'm human, I would take shortcuts and try to type less, while the LLM is tireless.
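To make the GLSL-to-NumPy claim concrete, here's a minimal sketch of the kind of mechanical translation involved - my own toy example, not one of the actual shaders mentioned above - using IQ's well-known cosine palette. The point is that `vec3` component-wise arithmetic maps almost one-to-one onto NumPy broadcasting:

```python
import numpy as np

# GLSL original (IQ's cosine palette) is roughly:
#   vec3 palette(float t, vec3 a, vec3 b, vec3 c, vec3 d) {
#       return a + b * cos(6.28318 * (c * t + d));
#   }
# In NumPy, per-component vec3 math becomes broadcasting over the
# last axis, and the same code works for a scalar t or a whole array.
def palette(t, a, b, c, d):
    t = np.asarray(t, dtype=np.float64)[..., None]  # add the RGB axis
    return a + b * np.cos(2.0 * np.pi * (c * t + d))

# Example coefficients (a classic "rainbow" parameterization):
a = np.array([0.5, 0.5, 0.5])
b = np.array([0.5, 0.5, 0.5])
c = np.array([1.0, 1.0, 1.0])
d = np.array([0.00, 0.33, 0.67])

colors = palette(np.linspace(0.0, 1.0, 256), a, b, c, d)  # shape (256, 3)
```

A real shader translation has to deal with more than this (swizzles, `fract`, texture fetches), but the flavor is the same: rote, context-heavy rewriting, which is exactly where LLMs shine.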
4) Anything that you can explain step-by-step, like you would to a junior programmer, and that then "just" needs to be implemented, an LLM can likely get right. Certainly in Python or Javascript, a bit less so in C++ - not because it doesn't know how to write good C++, but because everyone has their own style of C++, and the LLM, left to itself, is likely to write code in ways you might not like.
Btw - I imagine that using "projects" to give LLMs more context would "fix" that issue (and that is one reason why, whilst I found chat-like code-gen in C++ from scratch to be relatively unimpressive, I am constantly amazed by the quality of LLM-as-Intellisense in C++), and you can also feed these projects documentation for specific frameworks you're using etc... but I'm not there yet in my daily work.
There's a lot to learn!
Sat, 29 Mar 2025 20:06:31 -0700
- Vibecoding! (permalink)
I see people in my circles (i.e. hardcore AAA/system/graphics devs) not understanding this "vibecoding" idea - so let me share my 2c.
First of all. Have you tried to code with LLMs? And by that I don't mean something like: open chatgpt, ask to make a shadertoy/raytracer/whatever from scratch, laugh at the results. I mean, really tried - copilot (VisualStudio or VSCode) or cursor, as a daily driver in python or C++ etc, replacing intellisense. Have you tried creating projects, and giving context and documentation to the LLM? If not - start there. Mere intellectual curiosity should mean we try to understand new technologies, right?
Now, that said - let me shock you. I think "vibecoding" (the idea of letting an LLM code with little supervision) is incredibly interesting!
And I don't mean just to write small utilities and throw-away stuff - where it's absolutely great - but as a legitimate way of programming even small commercial applications and games.
Yes, it will make something relatively crappy. And yes, obviously, you, an experienced programmer, will do a ton better. In my experience, LLMs are roughly as good as a very diligent, tireless, but not particularly bright intern.
Anything that any freshman programmer could reasonably be expected to write, they will write, correctly. If something requires true expertise or true creativity, it often becomes easier to write it yourself (and that's why LLMs-as-Intellisense are much more practical for complex stuff than chat-based LLMs, btw - they just save you a ton of typing).
But there is a TON of programming in that area! Almost all experiments in new websites, small apps, even games, can be done that way. Remember when the web (1.0) was actually creative? Stuff like the "one million dollar page" - or even stuff like MySpace... Or... Instagram... Snapchat... Tinder... Well, pretty much everything, really, percentage-wise.
Yes! They won't write the Call of Duty engine. Yes, lots of people using LLMs are not "real programmers" nor "real gamedevs". So what? There was a time when we all were not "real programmers". I started writing "code" when I was single-digit-aged, on a C64. Did I know what I was doing? Hell no! And what's the problem with that? With tinkering?
I'd make you a bet: in the next, say, five years, there will be a billion-dollar company that gets sold and that started as a "vibecoding" thing from someone who knew nothing and learned along the way.
And that company will likely end up hiring a bunch of programmers who really know their stuff, once it needs to scale - that's not a problem.
I understand that there is a lot of hype around "AI" these days, but it's not intelligent to react to the hype with an equal amount of hate.
It's also interesting to see that this idea of "casual coding" has always been a goal throughout the history of computer science. Spreadsheets, Smalltalk & 4GLs, HyperCard, Toolbox/Flash, visual programming "components", nodegraphs, nocode et al. - for decades we have tried to make code "natural", to blur it away (which is different, BTW, from teaching programming - e.g. BASIC, Scratch - or using programming to teach reasoning - e.g. Logo)... and now that we actually can, we are... scared?
Yes, people make a mess with nodegraphs, they are not "real code" and often need to be rewritten (and they are not a great interface for what they try to do - but that's another can of worms) - but they enable non-programmers to prototype ideas... and for that we like them, right? What's up with hating now that people can do the same with natural language and LLMs?
p.s. & btw, as much as I hate hype too - I also think it's to some degree inevitable, as it's both human nature and rational economics: you can't know exactly who's going to win and perfectly allocate capital only to the few companies that will make it in the end.
p.p.s. interestingly, a side-effect of this "vibecoding" thing - if it ever becomes big enough to really make a dent in how certain ideas are explored - is that it will be much more "decentralized" and "anarchist" and all the other things (some) nerds like. Before, you needed MySpace to create your funky little corner of the internet; not everyone would learn HTML and publish on GeoCities. The fact that LLMs can reasonably use any library that is popular enough, or at least conceptually mainstream enough to be "understood" with some documentation, implies that big platforms and vendors have less of a hold.
Sat, 22 Mar 2025 17:32:35 -0700
- We are F. (permalink)
Listen, I'd love not to journal about politics every time, but this is the time we live in and this is what I think about.
I hate people throwing out the word fascism casually. Words are important.
Abusing them devalues their meaning.
The executive order against Perkins Coie is fascist.
Yes, it's not the first illegal executive action taken clearly knowing it was illegal and was going to be stopped by judges. But in all the other cases there was still a veneer of what could be called cunning politics - the administration had the power to go through Congress (unquestionably, let's not kid ourselves) but went the illegal way for showmanship, for expedience, and because they liked the idea of going to the Supreme Court for a chance of getting presidential powers expanded.
Win-win-win.
This Perkins Coie order is different. It's something that I don't believe Congress would have passed. It's something done knowing that most of the damage is in the order itself; even if it gets overturned, it doesn't matter. In other words, it's the president enacting absolute power over the most fundamental of the democratic counterbalances. It not only destroys a private law firm, it muzzles all the others - which, in fact, are staying silent, like good lap dogs.
Yes, this is fascist. There is no other way of characterizing it. And I'm sure everyone knows, it's not that these people are so delusional that they don't get it. They know, and they love it.
Absolute power.
Wed, 19 Mar 2025 11:03:48 -0700