What does this mean for the future of programming languages? Will modern languages be updated with the goal of making them easier for humans to read and write (the current objective), or easier for AI to parse? And will AI have to be retrained every time a target language gets updated?
Great questions! My guess is that we'll see the same balance between readability and expressiveness in programming languages going forward, maybe with an even greater focus on readability so humans can spot mistakes in AI-generated code faster. The bigger problem is where the training data for any new programming language will come from, now that UGC forums like Stack Overflow are all but dead.
Could well be that any AI-first programming language is constrained enough that its output can be verified mechanically, with formal rules or static checks. There's a reason AI coding has been most successful for frontend applications: humans can quickly inspect the rendered result of the generated code visually. So perhaps, down the line, readability will matter less than verifiability.
The other problem is very acute in programming right now, and something I run into a lot in my day-to-day: LLMs are often trained on older versions of frameworks like React and Next.js, and getting them to answer based on the specific version you're actually using can be tricky, especially since these models are only updated once every couple of months.
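One workaround I've seen for the stale-version problem is to mention the exact dependency versions from the project's package.json in the prompt, so the model at least knows which API surface you're targeting. A minimal sketch (the package.json content is inlined here for illustration, and the helper name and prompt wording are my own, not any library's API):

```typescript
// Sample package.json content, inlined so the snippet is self-contained.
const packageJson = `{
  "dependencies": {
    "next": "14.2.3",
    "react": "^18.3.1"
  }
}`;

interface PackageJson {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
}

// Build a one-line prompt preamble pinning the named frameworks to the
// versions found in package.json. Names not present are skipped.
function versionPreamble(raw: string, names: string[]): string {
  const pkg = JSON.parse(raw) as PackageJson;
  const deps = { ...pkg.devDependencies, ...pkg.dependencies };
  const pins = names
    .filter((n) => deps[n] !== undefined)
    .map((n) => `${n}@${deps[n]}`);
  return pins.length > 0
    ? `Answer for these exact versions: ${pins.join(", ")}.`
    : "";
}

console.log(versionPreamble(packageJson, ["react", "next"]));
// → Answer for these exact versions: react@^18.3.1, next@14.2.3.
```

It doesn't fix the model's training cutoff, but it at least stops it from silently answering for a two-major-versions-old API.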