Two years ago, I saw myself as a really good Python engineer. Now I'm building native mobile apps, desktop apps that talk to Slack, APIs in Go, and full web apps in React, in hours or days!
It feels like I've got superpowers. I love it. I feel productive, fast, creative. But at night, there's this strange feeling of sadness. My profession, my passion, all the things I worked so hard to learn, all the time and sacrifices, a machine can now do most of it. And the companies building these tools are just getting started.
What does this mean for the next generation of engineers? Where's it all heading? Do you feel the same?
The reason you can use these tools so effectively across native, mobile, Go, React etc is that you can apply most of what you learned about software development as a Python engineer in these new areas.
The thing LLMs replace is the need for understanding all of the trivia required for each platform.
I don't know how to write a for loop in Go (without looking it up), but I can write useful Go code now without spinning back up on Go first.
I still need to conceptually understand for loops, and what Go is, and structured programming, and compilers, and build and test scripts, and all kinds of other base level skills that people without an existing background in programming are completely missing.
I see LLMs as an amplifier and accelerant. I've accumulated a huge amount of fuzzy knowledge across my career - with an LLM I can now apply that fuzzy knowledge to concrete problems in a huge array of languages and platforms.
Previously I would stay in my lane: I was fluent in Python, JavaScript and SQL so I used those to solve every problem because I didn't want to take the time to spin up on the trivia for a new language or platform.
Now? I'll happily use things like Go and Bash and AppleScript and jq and ffmpeg and I'm considering picking up a Swift project.
This is difficult to express because I too have enjoyed using an LLM lately and have felt a productivity increase, but I think there is a false sense of security being expressed in your writing and it underlies one of the primary risks I see with LLMs for programming.
With minor exceptions, moving from one language to another isn’t a matter of simple syntax, trivia, or swapping standard libraries. Certainly, expert beginners do espouse that all the time, but languages often have fundamental concepts that they’re built on, and those concepts need to be understood in order to be effective with them. For example: have you ever seen a team move from Java to Scala, js to Java, or C# to Python - all of which I’ve seen - where the developers didn’t try to understand the language they were moving to? Without fail, they tried to force the concepts that were important in their prior language onto the new one, with abysmal results.
If you’re writing trivial scripts, or one-off utils, it probably doesn’t build up enough to matter, and feels great, but you don’t know what you don’t know, and you don’t know what to look for. Offloading the understanding of the concepts that are important for a language to an LLM is a recipe for a bad time.
> but languages often have fundamental concepts that they’re built on, and those concepts need to be understood in order to be effective with them
I completely agree. That's another reason I don't feel threatened by non-programmers using LLMs: to actually write useful Go code you need to figure out goroutines, for React you need to understand the state model and hooks, for AppleScript you need to understand how apps expose their features, etc etc etc.
All of these are things you need to figure out, and I would argue they are conceptually more complex than for loops etc.
But... you don't need to memorize the details. I find understanding concepts like goroutines to be a very different mental activity to memorizing the syntax for a Go for loop.
I can come back to some Go code a year later and remind myself how goroutines work very quickly, because I'm an experienced software engineer with a wealth of related knowledge about concurrency primitives to help me out.
> But... you don't need to memorize the details. I find understanding concepts like goroutines to be a very different mental activity to memorizing the syntax for a Go for loop.
That's one of the arguments that has never made any sense to me. Were people actually memorizing all this stuff, and are they now happy that they don't have to? Because that's what books, manuals, references, documentation, wikis, ... are there for.
I do agree with you both that you need to understand concepts. But by the time I need to write code, I already have a good understanding of what the solution looks like, and unless I'm creating the scaffolding of the project, I rarely need to write more than ten lines at a time. If I need to write more, that's when I reach for generators, snippets, code examples from the docs, or switch to a DSL.
Also by the time I need to code, I need factual information, not generated examples which can be wrong.
What are your thoughts on humans learning from LLM output?
I’ve been encouraging the new developers I work with to ensure they read the docs and learn the language to ensure the LLM doesn’t become a crutch, but rather a bicycle.
But it occurred to me recently that I probably learned most of what I know from examples of code written by others.
I’m certain my less experienced colleagues are doing the same, but from Claude rather than Stack Overflow…
I agree Simon. With the little time I have these days for side projects, LLMs have become my new best friends. I'm more worried about the future of my students than my own. I'm sure I'll grow old gracefully alongside the machines that helped me graduate and build a career.
I run Python workshops on weekends, and honestly, I'm speechless at the things students are already doing. But is it realistic to encourage them to study computer science in a year or two, when it's still a 4-year degree? By the time they start uni, LLMs will be 100 times more powerful than they are today. And in 6 years?
Any advice?
> With minor exceptions, moving from one language to another isn’t a matter of simple syntax, trivia, or swapping standard libraries.
I think this is not true for most programming languages. Most of the ones we use today have converged around an imperative model with fairly similar functionality. Converting from one to another can often be done programmatically!
You can write Fortran in any language!
How about converting a Rust library into Haskell?
https://ghuntley.com/oh-fuck/
That's from 6 months ago, and the tooling today is almost night-and-day better.
It really makes me wonder what is taking us so long to translate all these C Python modules like NumPy and SciPy into something that works with one of the GIL-free Python variants out there.
Where can I see the code?
Also:
> This wasn't something that existed; it wasn't regurgitating knowledge from Stackoverflow. It was inventing/creating something new.
Didn't he expressly ask the LLM to copy an established, documented library?
Yes, but in another language. It wasn't regurgitating Haskell code it had seen before.
Translation is the one thing everyone expects AI to be good at. And while I’m not an expert in the languages in the post above, so I can’t review it, I’d be willing to bet there are non-obvious mistakes that could end up being pretty significant. The same thing happens with languages I’m an expert in.
It’s odd the post describes what it’s doing as creating something new - that’s only true in the most literal (intellectually dishonest) sense.
> moving from one language to another isn’t a matter of simple syntax, trivia, or swapping standard libraries. [...] Java to Scala, js to Java, or C# to Python
I find it kind of funny (or maybe sad?) that you say that, then your examples are all languages that basically offer the same features and semantics, but with slightly different syntax. They're all Algol/C-style languages.
I'd understand moving from Java to/from Haskell can be a bit tricky since they're actually very different from each other, but C# to/from Java? Those languages are more similar than they are different, and the biggest changes is just names and trivia basically, not the concepts themselves.
100%.
LLMs aren't good engineers. They're mediocre, overengineering, genius, dumb, too simplistic, clean coding, overthinking, underthinking, great but terrible engineers rolled into one. You can get them to fight for any viewpoint you want with the right prompt. They're literally everything at once
To get good code out of them, you have to understand what good code is in the first place and lead them to it
My experience is iterating through 5 potential options, being like "nope, that won't work, do it this way.. nope, adjust that.. nope, add more of this.. oh yeah, that's great!"
The "that's great!" made me nervous that an LLM was capable of producing code as good as me until I realized that I threw out most of what it gave.. and someone less experienced would've taken the first suggestion without realizing its drawbacks
> To get good code out of them, you have to understand what good code is in the first place and lead them to it
That's a great way of putting it. Completely agree.
Completely agree. And along the way, my personal knowledge of bash has increased a lot, for example, just because I’m using it a lot more
LLMs are search engines, not robots. You can automate things with search engine results, but you can also accidentally automate NSFW content being delivered to customers. Use the right tools correctly.
This is 100% the rational response I'd give to OP, but I can't deny I feel a bit of the same unease. It's mostly that this technology is really... weird, I think. I struggle to put that into words if I'm being honest.
This way of putting it resonates with me: unlocking the value of fuzzy knowledge.
Have you seen non-technical people use LLMs to build things? It's usually a lot slower, or a disaster. Perhaps in time you won't necessarily need to be technical, but you still need to be precise. Just knowing HTML to the level we do allows us to embed text into it so the LLM can reason more sharply about it, etc.
Being technical still is an advantage, in my opinion.
Not really a criticism of llms though. Just a natural result of more accessibility to programming.
> My profession, my passion, all the things I worked so hard to learn, all the time and sacrifices, a machine can now do most of it
Can it? It doesn't have experience, it doesn't have foresight or hindsight, it cannot plan, and it just follows the instructions, regardless of whether the instructions are good or bad.
But you, as a human, have taste, ideas, creativity and a goal. You can convince others of your good idea, and can take things into account that an LLM can only think about if someone tells it to think about it.
I'm not worried about programming as a profession disappearing, but it is changing, as we're moving up the ladder of abstractions.
When I first got started professionally with development, you could get a career in it without knowing the difference between bits/bytes, and without writing a line of assembly, or any other low-level details. Decades before that, those were base-level requirements for being able to program things, but we've moved up the abstraction ladder.
Now we're getting so far up this ladder, that you don't even need to know a programming language to make programs, if you're good enough at English and good enough at knowing what the program needs to do.
Still, the people who understand memory layout, and assembly, and bits and bobs will always understand more of what's happening underneath than me, and probably be able to do a "better" job than me when that's needed. But it doesn't mean the rest of the layers above that are useless or will even go away.
I’m sure the handicrafts laborers prior to the industrial revolution felt the same way.
However, they also likely had little education, 1-2 of their children probably died of trivial illnesses before the age of 10, and they lived without electricity, indoor plumbing, running water or refrigeration.
Yes, blacksmithing your own tools is romantic (just as hand-writing Python code is), but somehow I’m guessing our descendants will be better off living at higher layers of abstraction. Nobody will stop them from writing Python code themselves, and I’m sure they’ll do it as a fun hobby, like blacksmithing is today.
Stokers are a good example that comes to mind.
Their job was to shovel coal into the firebox of a steam locomotive to keep the engine running. It required a lot of skill to keep the fire at just the right intensity.
When diesel and electric trains came in, that role disappeared.
my brain read "before the age of 10" as "before the age of 100" haha and wondered if llms are leading us to the point where we look back and realize dying before 100 is truly young??
Absolutely feel the same way. I have been writing software professionally for over 20 years and truly love the craft. Now that I am using Claude Code basically 100% of the time my productivity has certainly increased, but I agree there is a hollowness to it - where before the process felt like art it now feels like industrialized mass manufacturing. I’m hoping that I can rediscover some of what has kept me so fascinated with software in this new reality, because quite a bit of the joy has indeed been stripped away for me.
Yeah there are a lot of things in programming that I don’t enjoy, that I can now avoid doing, like learning how to do the same technical thing in a different language/framework.
But the stuff I enjoy doing is something I don’t delegate to the agent. I just wonder how long the stuff I enjoy can be delegated to an agent who will do it cheaper and faster.
Till there's an outage and the AI cannot help because they got rid of all the technical people, or worse, it generates sloppy underperforming code. You will continue to need devs for a good minute.
For this you lose a day of LLM usage? Big deal. Don’t mean to pick on you at all, but I also saw you with a comment above along the lines of ‘but what if the LLM includes NSFW stuff?’, which again seems to be clutching at straws, since we all still at least review what’s being added.
I feel the worst criticisms of LLMs in this thread boil down to ‘the LLM might be offline, might include NSFW stuff, and might lead to non-experts coding with poor results’, which are such weak criticisms that it’s a damn good compliment to just how good they’ve gotten, isn’t it?
I think it’s yet another layer of abstraction on top of complex ideas. There will always be room for people working on the infrastructure, tools, maintenance, and operations side of things. The biggest danger is if you stop learning because you trust AI to just do it.
I’ve been (carefully) using AI on some AWS infrastructure stuff which I don’t know about. Whenever it proposes something, I research and learn what it’s actually trying to do. I’ve been learning quickly with this approach because it helps point me in the right direction, so I know what to search for. Of course, this includes reviews and discussions with my colleagues who know more, and I frequently completely rewrite what AI has done if it’s getting too complex and hard to understand.
The important thing is not allowing AI to abstract away your thought process. It can help, but on my terms
Feeling the same, I just started a dialectic blog, with ChatGPT arguing with itself based on my prompt. It can have some biting words. I also have been using Suno AI to make a song with each post; while not perfect, it certainly allows for totally non-musical people like me to have something produced that can be listened to.
One post that fits with this feeling is https://silicon-dialectic.jostylr.com/2025/07/02/impostor-sy... The techno-apocalyptic lullaby at the end that the AIs came up with is kind of chilling.
It is similar to the days of first search engines - suddenly you didn't have to remember everything about your language/library or search multiple files.
You lose some, you gain some.
Although I'm not happy with the code quality/style produced in most cases, or with the duplication and the lack of refactoring to make the code look better/smaller.
Yes, software value appears to only exist for owners of products. You should pursue ownership of something. You can write anything, even a different type of Claude code!
I have the exact same feeling. 100x more engineering at the cost of some questions.
Yes, remixing a bunch of shit that already exists is good entertainment; I do it all the time.
No need to be sad though: as you can tell from your own creations, you haven't really created anything of any value. It's all AI slop.
Terminal + Claude Code + a project folder: it's all you need! Who knew?!
I never really enjoyed using full-blown IDEs anyway; setting them up was a chore, and configuring Qt to cross-compile for various platforms was a project in itself. I always found a text editor and terminal the most logical combination for editing. Adding another terminal window with Claude Code to perform my requests while I debug, edit, test, and review, I feel like I have graduated from coder-editor to project leader, and there are no employee-associated problems to deal with. All of my shelved side projects are complete as I wanted them, and all in just a few months since March, when Claude went big.
Very well written and a joy to read.
Yes, IDEs of the future will not look like those we have today. I too started with Cursor and similar VS Code enhanced IDEs. And ended up using Claude Code. And realised my terminal is more important for me now. So I migrated from iTerm to Ghostty (faster, lighter, more modern), Tmux and Tmuxinator, and NeoVim! Because I just cat/bat files and occasionally edit a file, if at all. Claude Code does the heavy lifting, I write specs and prompts, in NeoVim or Emacs. And I am loving my workflow.
I too use Claude Code for more than just generating code. Whenever I need to rewrite a config file (for zsh, neovim, ghostty, etc.), I start Claude Code and assign it the task. It will make the changes and even refactor my settings file in a few minutes.
Lastly, I use it to ask questions about my codebase, refactor code, let it document my code, even make meaningful commits.
Pure awesomeness.
> Lastly, I use it to ask questions about my codebase, refactor code, let it document my code, even make meaningful commits.
I've found that CC writes really good commits; I added info about Conventional Commits [1] to the CLAUDE.md file with a few examples.
[1]: https://www.conventionalcommits.org/en/v1.0.0/#summary
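For anyone who hasn't used it, the Conventional Commits format is roughly `<type>(<scope>): <description>`, e.g. `feat(auth): add passwordless login` or `fix(parser): handle empty config files` (those examples are made up); the CLAUDE.md section just spells out that pattern with a few sample messages.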
Nice! Thanks for sharing.
Will CC automatically back up these personal configs you are talking about, say .zshrc, before making changes?
I keep track of all my config files via git, and after any change I make a commit (or ask the tool to do that).
It’s a feature of the IDE, but pretty much all of them have an option for AIs to only change code via a merge-and-review process. It’s not the AI changing things directly; it’s a “here’s a diff: accept, change, or discard” process after the prompt. Past that, the IDEs usually have a local history, and you can also use git.
I got a crude 6DOF wireframe renderer to boot as a System 6 (classic Mac) app using C++ and Retro68 a couple of weeks ago, using LLM tooling.
Soon, someone will duplicate MacOS entirely using LLMs!
A year or two ago I had a pretty simple thought. The idea was that LLMs make a great assistant for a skilled developer, a bad replacement for a skilled developer, and a dangerous assistant for an unskilled developer. That idea has mostly seemed to hold true as I have gotten more experience with LLMs.
I think I would revise it now to allow that LLMs can be really useful teachers for unskilled developers, however I don’t get the sense that that’s how they’re being used in many cases. It seems that it’s more common that unskilled developers using LLMs just vibe code their way through issues that they run into, never really gaining insight into why the thing they’re working on failed, and just push through until it seems like it isn’t failing anymore.
And that more or less reinforces my idea that they’re dangerous assistants in those cases where the developer is unskilled. It’s pretty much inevitable that this will introduce problems that make it through the process of creation not only unnoticed, but problems that the developer is incapable of understanding even if they are noticed.
I've been doing this for little utilities and it's been great.
I created one utility to show my launchctl/launchd tasks with status indicators for what is loaded/unloaded/running/failed (very much like the menu icon you get for OrbStack) and it works great. With Claude it only took a couple of hours to get it working and dial it in the way I wanted.
Yeah, I've built a small iOS app and a WordPress plugin for myself and love them. I'm curious if more people will do this in the near future; should we all be sharing our code on GitHub?
Love this conclusion about how reduced friction from LLM assistance helps with that last 20% of actually shipping a side project:
> The most exciting thing about this entire journey for me is not the app I built, but that I am now able to scratch my coding itch and ship polished side projects again. It's like I found an extra 5 hours every day, and all it cost me was $200 a month.
I'll just have Claude read the tutorial to learn.
> I've been building software for the Mac since 2008
Ok, so they knew where Claude went wrong and could correct for it.
Right: tools like Claude Code amplify existing skills and expertise, they don't replace it.
The main issue here is you can’t acquire expertise by using an LLM to code for you.
So unless you have 15+ years of experience, better add more reps. You can always switch to LLM code assist in a blink; there is no barrier to entry at all.
> The main issue here is you can’t acquire expertise by using an LLM to code for you.
Agreed, but I wish I had it as a teacher while learning. The amount of help my interns need from me has reduced by at least 50%, and what remains is the non-trivial stuff which is completely worth my time to coach and mentor them
I tend to learn via code examples. 20+ years of experience, and the LLMs have taught me about some features of libraries I was using but had overlooked.
I think they add to expertise honestly.
I wonder how common this is. Personally, looking at code examples can be helpful, but the vast majority of my learning comes from doing, failing, trying a different approach, succeeding, rinsing and repeating.
I also haven't had much luck with getting LLMs to generate useful code. I'm sure part of that is that the stack I am using (Elixir) is much less popular than many others, but I have tried everything, even the new phoenix.new, and it is only about an 80 to 90% solution, and that remaining percentage is full of bugs or terrible design patterns that will absolutely bite in the future. In nearly everything I've tried to do, it introduces bugs, and hunting those down is worse to me than if I had just done the work manually in the first place. I have spent hours trying to coach the AI through a particular task, only to have the end solution need to be thrown away and started from scratch.
Speaking personally, my skills are atrophying the more I use the AI tools. It still feels like a worthwhile trade-off in many situations, but a trade-off it is.
I also have not had much luck with LLMs when it comes to anything with substantial complexity.
Where I’ve found them best is for generating highly focused examples of specific APIs or concepts. They’re much better at that, though hallucinations still show up from time to time.
> The main issue here is you can’t acquire expertise by using an LLM to code for you.
You learn far far faster from reading code and writing tests than you do just writing code alone.
I've long suspected that the bottleneck to software development is code generation not keeping up with idea generation.
You learn by trying to solve the problem by yourself.
This is not about syntax but about learning how to create solutions.
When you read solutions you merely memorise existing ones. You don’t learn how to come up with your own.
And at the bottom it’s revealed that this costs $200/month. I have trouble convincing myself to give Autodesk $50/month and I need that software for my primary hobby.
And none of these AI companies are profitable. Imagine how much it will cost or how much it will be enshittified when the investors come looking for their returns.
If everyone whose code was illegally used to train these models won a copyright lawsuit against Claude, it would suddenly not be so good at writing Swift code.
Do we really want to bet on Disney losing their AI lawsuit?
Honestly I realize my comment is not adding much to the discussion but the AI fatigue is real. At this point I think HN and other tech forums would do well to ban the topic for posts like this.
Imagine if we were upvoting stories about how people are getting lots of coding done easier with Google and StackOverflow. It would be rightfully ridiculed as vapid content. Ultimately that’s what this blog post is. I really don’t care to hear how yet another dingus “programmer” is using AI to do their hobby/job for them.
> And at the bottom it’s revealed that this costs $200/month.
I started with the pay-as-you-go plan; I'm currently using the Claude Pro plan at $20/month, which is great for my use case.
> And none of these AI companies are profitable. Imagine how much it will cost or how much it will be enshittified when the investors come looking for their returns.
I suspect investors will give AI companies a lot of runway. OpenAI went from $0 to over $10 billion in revenue in less than 3 years. I know that's not a profit but it bodes well for the future.
(As an aside, it took Microsoft 22 years to reach $10 billion in revenue.)
Anthropic went from $0 in 2021 to over $4 billion in about 3 years.
In comparison, it took Twitter about eleven years after its founding in 2006 to become profitable in 2017. And for much of that time, Twitter didn't have a viable business model.
I don't think investors are concerned.
Regarding lawsuits, I'm sure the AI companies will win some and lose some. Either way, after all is said and done, there will be settlements and agreements. Nothing is going to stop this train.
But—chalk one up for Anthropic for winning their case, Meta getting a favorable ruling and all the rest [1].
> won a copyright lawsuit against Claude, it would suddenly not be so good at writing Swift code.
I suspect Claude is going to get really good at writing Swift--Anthropic is working with Apple [2].
> …but the AI fatigue is real
You might want to get off the internet if you're already suffering from AI fatigue; things are just getting started.
[1]: "AI companies start winning the copyright fight" -- https://www.theguardian.com/technology/2025/jun/30/ai-techsc...
[2]: "Apple Partners With Anthropic for Claude-Powered AI Coding Platform" -- https://www.macrumors.com/2025/05/02/apple-anthropic-ai-codi...
Maybe, but cost is going down fast on tokens, so this will likely be less of an issue.
On Gemini it is actually going up
On average the change was a cost decrease because of the thinking token price decrease.
For people who do not need thinking they can use flash lite which is cheaper as well.
Google's gemini-cli offers the most generous free tier at the moment, 2,500 daily requests. I'd recommend checking it out, since most of what's said about Claude Code applies to it too.
Buy things that make you more productive. Seriously. Assuming you’re not in some horrible trap of no money to buy things that improve your life you’re being ridiculous by not doing it.
Also, I code all day and have yet to hit the $10/month Claude cap that the JetBrains plan offers.
> And at the bottom it’s revealed that this costs $200/month
That Claude Code costs $200/month really isn't something that needs to be "revealed."
Yes, I know of https://xkcd.com/1053/
> And at the bottom it’s revealed that this costs $200/month.
So, firstly, the limits on the $100/month plan are pretty high in my experience. I do hit them, but it takes a lot.
> I have trouble convincing myself to give Autodesk $50/month and I need that software for my primary hobby.
Before I used Claude Code, I absolutely would have agreed with you, $100–$200 every month is just a ridiculous amount for any software.
But... I don't know, I just think Claude Code really is that good.
> Imagine if we were upvoting stories about how people are getting lots of coding done easier with Google and StackOverflow.
You know, something I've thought about before (as in, before LLMs) is just how hard (impossible?) it would be for me to program without an internet connection. I need to be able to look stuff up.
I can absolutely imagine that if I was a programmer before widespread internet use, and the internet launched with things like Google and lots of resources out of the gate... yeah, that would be revelatory! I'm not saying AI is necessarily the same, but it's something to think about.
> Imagine how much it will cost or how much it will be enshittified when the investors come looking for their returns.
Indeed but one potential saving grace is the open sourcing of good models (eg. Meta’s Llama). If they continue to open source competitive models, we might be able to stave that off.
Has anyone tried this for command line applications? This could be a great way to develop some very specific/corner-case tools.
I write CLI tools with LLMs all the time. I even have a custom Claude Project that teaches the LLM to use inline script dependencies with uv so I can "uv run script.py" without first having to install anything else: https://simonwillison.net/2024/Dec/19/one-shot-python-tools/
I have a collection of tools I've built in this way here: https://tools.simonwillison.net/python/
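As a rough illustration of the inline-dependencies trick (a minimal sketch; the dependency and URL are just placeholders), a generated script can carry its own requirements in a PEP 723 metadata block at the top:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "httpx",
# ]
# ///
"""Fetch a URL and print the HTTP status code."""
import sys

import httpx  # resolved automatically by uv from the block above

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "https://example.com"
    print(httpx.get(url).status_code)
```

With that header in place, `uv run script.py` creates a throwaway environment with httpx installed on first use, so there's nothing to install by hand.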
A CLI tool is a native binary that I can run directly via e.g. `/path/to/binary`. If I need to use `uv run ...` to execute something, then that thing isn't really a CLI tool, it's an interpreted script.
Very cool! Thank you.
I've been doing this recently; I don't really know Swift, and I wanted to see how far I could get using an almost 100% LLM workflow for my coding. For the most part, I have been able to get a pretty substantial app going, but once it gets to a certain size, things start to become hairy.
For instance, I am adding bluetooth capabilities to my app, and the bluetooth manager can be quite large, to the point where the LLM context window starts to become an issue.
When it gets to that point, the LLM can start making suggestions that don't make sense; it starts forgetting what you had already done, and it can start making architectural suggestions that contradict what you had been doing before. So if you follow them, you end up in this weird state, mired in conflicting decisions. Almost like design by committee, without the committee.
It's important to recognize this when it's happening rather than just blindly following what it suggests, so some expertise at engineering is still necessary even though the LLM can write okay Swift code.
Maybe you should ask the LLM to write concise documentation for the functions that it generates (or ask it to afterwards). Then use that documentation in the context window instead of the actual, more lengthy code. Compartmentalize your code into modules/functions with a clearly defined boundary (API).
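A rough sketch of that idea in Python (the commenter is working in Swift, but the approach translates; the file name and output format here are just illustrative): generate a one-line-per-function summary of a module and paste that into the context instead of the whole file.

```python
import ast
import sys


def api_summary(path: str) -> str:
    """One line per function: name, arguments, and the first line of its
    docstring. Paste this compact summary into the LLM context instead of
    the full (much longer) source file."""
    tree = ast.parse(open(path, encoding="utf-8").read())
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or ""
            first = doc.splitlines()[0] if doc else "(undocumented)"
            lines.append(f"def {node.name}({args}): {first}")
    return "\n".join(lines)


if __name__ == "__main__":
    # e.g. python api_summary.py bluetooth_manager.py
    print(api_summary(sys.argv[1]))
```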