From Angular to React with AI: A field report

Can you use AI to automatically rewrite a medium-sized Angular project in React? Nico has tried it and reports live from the lab.
Initial situation
A few months ago, we were tasked with bringing an older Meteor project as up to date as possible. Unfortunately, it turned out that the Meteor ecosystem no longer supports Angular. Due to the Angular dependency, most of the important parts of the stack, including MongoDB and Node.js itself, could no longer be updated.
Our advice: rewrite the frontend in React. With around 80 components and 20,000 lines of code, this was a challenge, but feasible. However, the effort required for manual refactoring was not within the budget.
But then I had an idea: this was actually a perfect case for an experiment! Can you efficiently rewrite an Angular project in React with AI? So we suggested giving it a try, with a cost ceiling and the risk of failure made explicit. Five days - the bet was on!
Question
Like most development companies, we use AI tools such as Cursor or Copilot every day. These assistants help with ad hoc coding, within a limited scope, directly in the editor.
A different approach is needed for larger-scale auto-refactoring: a scripting approach. The question: are today's coding LLMs suitable for this? Is it possible to configure AI tools so that you can let them do the work, sit back comfortably in front of the screen, eat your muesli and give a few instructions from time to time?
Setup
For the refactoring, I decided to use Aider, a terminal-based AI coding tool; I also tried OpenRouter, a kind of proxy API for various LLMs. As the LLM I took claude-3-5-sonnet-20241022, which ranked highly in the Aider LLM Leaderboard at the time. I used VSCode as the editor. I also created a notes page, which quickly filled up.
Ready, set, go!
Test Suite
First, I tried to generate tests with Aider: Can I generate functional and comprehensive (Playwright) end-to-end tests without having to read the code myself?
Now we need to dive into how Aider works: Aider is a REPL, i.e. an interactive command line tool. It listens to various commands, e.g. /add to add files from the code base to the context. I gave Aider the file with all the routes of the website and then issued the command:
/code create a test for /websites
Aider searched for the relevant files, read the code and generated a test draft - and the first problems appeared: dependency detection worked well with imports, but not with Angular template tags. I had to experiment quite a bit with instructions until it worked reasonably well - and where it didn't, I had to manually add the relevant files for the specific test to the context (with /add).
After some trial and error, the first end-to-end test was generated. However, the process was neither automatic nor error-free. I would have been faster with the Playwright test generator, which lets you simply click through the interface (even if that tool has its pitfalls).
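For illustration, the result looked roughly like the following sketch. The /websites route comes from the text above, but the heading and selector here are hypothetical placeholders rather than the actual generated code, and a configured Playwright baseURL is assumed:

import { test, expect } from "@playwright/test";

test("lists websites on /websites", async ({ page }) => {
  // Relies on baseURL being set in playwright.config.ts
  await page.goto("/websites");

  // The page should render a heading and at least one entry in the list
  await expect(page.getByRole("heading", { name: "Websites" })).toBeVisible();
  await expect(page.locator(".website-list-item").first()).toBeVisible();
});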
Refactoring
Again I took the TypeScript file with all the routes of the website as a basis; it served as a directory of components. I tried to have Aider rewrite route after route in React. As orientation for Aider, I wrote a CONTEXT.md file with a description of the undertaking and the approach.
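To give an idea of what such a directory of components looks like, here is a hypothetical sketch of a routes file (assuming an Angular-style Routes array; the names are invented and the real file was of course much longer):

import { Routes } from "@angular/router";

import { WebsitesListComponent } from "./websites/websites-list.component";
import { WebsiteDetailComponent } from "./websites/website-detail.component";

export const routes: Routes = [
  { path: "websites", component: WebsitesListComponent },
  { path: "websites/:id", component: WebsiteDetailComponent },
  // ... one entry per page, which makes the file a convenient work list
];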
With every learning, I adapted the document. But hey... the AI could actually write its own instructions. So I started letting Aider update the document itself with each adjustment, for example:
you created files while we were in dependencies mode. undo and update CONTEXT.md so it doesn't happen again
It quickly became clear that managing the files in context was very important: if there were too many or too large files, the quality would rapidly deteriorate.
And another tip: When refactoring, it is worth generating the new components in new files next to the old ones instead of overwriting or deleting the old ones directly, so that cross-comparisons are still possible until the end.
After tinkering around for a while, I was able to start generating reasonably coherent components on the surface - however:
- code generation became slower and slower for longer files;
- partial solutions were often generated; and/or
- dependencies were not imported.
But with a few fixes or retries, I usually reached a compilable state that looked superficially correct when viewed in the browser.
It wasn't quite as automatic as I had imagined, but I finally got into a kind of flow and was able to generate and fix components relatively quickly. At some point I started running three Aider instances in parallel in separate terminals to rewrite components, and within a few hours the whole website was "finished".
I knew that the moment of truth was approaching. There were three possibilities:
- The generated code needs a thorough review and a few fixes, and is then good to go.
- I find systematic or deeper errors during the review, but they can easily be solved with better Aider instructions and a regeneration of the code.
- Worst case: I reach the point where it would have been better to rewrite everything myself.
Review & Fixing
The review turned out to be quite exhausting for various reasons:
- Reviewing AI-generated code is an unusual experience. The AI is powerful, but at the same time makes mistakes that a human would never make. For example, it replaced one placeholder with another for no reason. Or it re-implemented a component that was not in context every time that component was used - a dozen times in total.
- Reworking generated code made the result gradually worse. It was more expedient to completely regenerate the components with a better prompt. However, this meant that useful approaches could be lost.
- Bugs could be anywhere: in the old code, in the generated code, in the newly generated fixes, in my own fixes and in the testing pipeline. This made managing and cross-comparing the different versions challenging and tedious.
- In the end, I had to solve the more complex technical challenges myself, especially paradigm differences such as two-way binding (Angular) vs unidirectional data flow (React) or form validation (see the sketch after this list).
- Each separate execution of the AI (thus each generated file) had a subtly different coding style, subtly different assumptions, approaches and errors. Learnings from the review of one file could not always be transferred to the next.
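To make the paradigm gap concrete, here is a minimal sketch with an invented NameField component. Where an Angular template writes <input [(ngModel)]="name"> and gets synchronization in both directions for free, React requires the state and the change handler to be spelled out explicitly:

import { useState } from "react";

// React counterpart of an Angular <input [(ngModel)]="name"> field:
// state flows down into the input, and changes flow back up
// through an explicit onChange handler.
export function NameField() {
  const [name, setName] = useState("");
  return <input value={name} onChange={(event) => setName(event.target.value)} />;
}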
In short, the review-and-fix process became extremely frustrating. With great effort, I was able to bring a few components up to a satisfactory level, but more and more I had the feeling that I had built a minefield for myself. I realized that I could not guarantee the quality of the end result with reasonable effort. The experiment had failed.
Conclusion
AI remains helpful as an assistant - it is not ready as a scripting tool for extensive rewrites (at least Aider, as of December 2024).
What other approaches could be taken? I see the following possibilities:
- AI is really, really cheap: the whole experiment cost just under $60. The question arises: what could be achieved if we invested much more in AI power? Could we perhaps guarantee better, more systematic code quality with a cleverly orchestrated swarm or pipeline of AI agents?
- Another approach that could be worthwhile is to operate at a different level of abstraction: instead of rewriting code directly with AI, you could write AI-assisted codemods that rewrite the code at the AST level, with libraries such as ts-morph, Babel, jscodeshift and the Angular compiler (see the sketch after this list). The advantage: once such a refactoring tool is written, it works, and always in exactly the same way. The problem: AIs are not trained for this level of abstraction and can therefore only help to a limited extent (we are back in the familiar assistant mode).
- Last but not least: wait and see. The AI field is developing so rapidly that it will probably be possible to rewrite an entire website in one go before the end of 2025. Cursor, for example, already pulled ahead during my experiment, which in my opinion condemns Aider to obsolescence.
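As a rough illustration of the codemod idea, here is a minimal ts-morph sketch (paths and the chosen transformation are hypothetical). It only collects the @Component metadata that a real codemod would then translate into React components:

import { Project, SyntaxKind } from "ts-morph";

const project = new Project({ tsConfigFilePath: "tsconfig.json" });

for (const file of project.getSourceFiles("src/app/**/*.component.ts")) {
  for (const cls of file.getClasses()) {
    const decorator = cls.getDecorator("Component");
    if (!decorator) continue;

    // Read the metadata object from @Component({ selector: "...", ... })
    const metadata = decorator
      .getArguments()[0]
      ?.asKind(SyntaxKind.ObjectLiteralExpression);
    const selector = metadata
      ?.getProperty("selector")
      ?.asKind(SyntaxKind.PropertyAssignment)
      ?.getInitializer()
      ?.getText();

    console.log(`${cls.getName()} (${selector ?? "no selector"}) in ${file.getBaseName()}`);
    // A real codemod would now rewrite inputs/outputs into a props interface,
    // translate the template and write a new .tsx file next to the original.
  }
}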

Written by
Nicola Marcacci Rossi