June 2024 marked a turning point for AI in software engineering. For years, AI copilots had served primarily as autocomplete tools: useful, but limited. This month, these assistants began taking on far broader responsibilities, such as suggesting multi-line changes, detecting logic errors, generating tests, and providing in-line explanations.
At the forefront of this shift were tools like CodeRabbit, Amazon CodeWhisperer, and Qodana. CodeRabbit distinguished itself by integrating tightly with GitHub and GitLab workflows, offering inline suggestions and review summaries directly in pull requests. Developers could receive proactive comments on cyclomatic complexity, dependency issues, or even architectural recommendations.
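CodeRabbit's internals aren't public, but the mechanism behind those inline suggestions is ordinary pull-request tooling. As a rough sketch, here is how any reviewer bot could post a line-level comment through GitHub's REST API; the repository, PR number, and comment text below are placeholders rather than anything CodeRabbit-specific.

```python
import os
import requests

# Illustrative only: posts a single line-level review comment on a pull request
# via GitHub's REST API. Repo, PR number, and the comment body are placeholders;
# this shows the general mechanism an AI reviewer relies on, not CodeRabbit's code.
GITHUB_API = "https://api.github.com"
OWNER, REPO, PR_NUMBER = "example-org", "example-repo", 42  # hypothetical values

def post_inline_comment(commit_sha: str, path: str, line: int, body: str) -> dict:
    """Attach a review comment to a specific line of a file in the PR diff."""
    url = f"{GITHUB_API}/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/comments"
    headers = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }
    payload = {
        "commit_id": commit_sha,  # the head commit the comment applies to
        "path": path,             # file path within the repository
        "line": line,             # line number on the new side of the diff
        "side": "RIGHT",
        "body": body,             # the suggestion text, e.g. produced by an LLM
    }
    response = requests.post(url, headers=headers, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()
```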
Amazon CodeWhisperer, meanwhile, focused on security. It began flagging potential injection flaws, secrets in code, and insecure third-party package usage — essentially acting as a live secure code reviewer. Its integration into AWS Cloud9 and VS Code made it accessible for developers operating in cloud-native environments.
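To make the security angle concrete, here is the kind of code such a reviewer flags, sketched in Python rather than taken from actual CodeWhisperer output: a credential committed to source and a query built by string interpolation, alongside the parameterised fix a reviewer would typically suggest.

```python
import sqlite3

# The class of issue a security-focused reviewer flags (illustrative, not actual
# CodeWhisperer output): a hardcoded credential and SQL built by string formatting.
API_KEY = "sk-live-1234567890abcdef"  # flagged: secret committed to source control

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # flagged: user input concatenated into SQL enables injection
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # suggested fix: a parameterised query keeps data out of the SQL text
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchone()
```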
JetBrains’ Qodana, on the other hand, operated more like a CI/CD pipeline reviewer. It evaluated code quality based on user-defined policies, allowing teams to enforce coding standards, license compliance, and documentation completeness before any merge occurred. Combined with JetBrains IDEs, Qodana helped developers bake best practices directly into the development cycle.
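Qodana publishes its findings in the standard SARIF format, so a merge gate can be as simple as a script that fails the pipeline when the report contains error-level results. The report path and the zero-errors policy below are illustrative assumptions, not part of Qodana's own tooling.

```python
import json
import sys

# A minimal CI quality gate over a SARIF report (the format Qodana and many other
# analysers emit). The report path and the "no errors allowed" policy are assumptions
# for illustration; a real pipeline would point this at its own scan artefact.
REPORT_PATH = "qodana.sarif.json"  # hypothetical location of the scan output

def count_findings_by_level(report_path: str) -> dict:
    """Tally SARIF results by severity level across all runs in the report."""
    with open(report_path, encoding="utf-8") as fh:
        sarif = json.load(fh)
    counts: dict = {}
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            level = result.get("level", "warning")
            counts[level] = counts.get(level, 0) + 1
    return counts

if __name__ == "__main__":
    counts = count_findings_by_level(REPORT_PATH)
    print("findings:", counts)
    # Policy: block the merge if any error-level finding is present.
    sys.exit(1 if counts.get("error", 0) > 0 else 0)
```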
This shift wasn’t just about tools. Engineering culture changed too. Teams began hosting “AI-enhanced code reviews,” where humans collaborated with AI to evaluate pull requests. In many organisations, AI handled the repetitive, boilerplate checks, freeing human reviewers to focus on architecture, readability, and long-term maintainability.
Measurable benefits began to emerge. A survey by Stack Overflow in Q2 2024 found that teams using AI code reviewers experienced:
- 20–30% faster pull request turnaround
- 40% reduction in time spent on trivial comments
- A significant increase in onboarding speed for junior engineers
However, it wasn’t all smooth sailing. Developers reported:
- Contextual misunderstandings: AI sometimes flagged issues out of context or misunderstood idiomatic code.
- Feedback noise: Copilots could generate too many suggestions, some of which were stylistic rather than impactful.
- Trust issues: Senior engineers were hesitant to approve AI-generated suggestions without thorough manual validation.
To address these concerns, many teams began versioning their prompt templates and creating internal “AI Review Guidelines.” These documents clarified which types of feedback could be trusted, when to escalate to human review, and how to interpret AI suggestions.
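One lightweight way to version prompts is to keep the templates themselves in source control with explicit version identifiers, so every piece of AI feedback can be traced back to the instructions that produced it. The sketch below is hypothetical; the names and versioning scheme are illustrative rather than any particular team's convention.

```python
from dataclasses import dataclass

# A hypothetical, source-controlled registry of review prompts. Each template
# carries an explicit version so PR metadata can record exactly which
# instructions produced a given piece of AI feedback.
@dataclass(frozen=True)
class ReviewPrompt:
    version: str
    template: str

REVIEW_PROMPTS = {
    "security-check": ReviewPrompt(
        version="2024.06.1",
        template=(
            "Review the following diff for injection flaws, hardcoded secrets, "
            "and unsafe dependency usage. Diff:\n{diff}"
        ),
    ),
    "style-check": ReviewPrompt(
        version="2024.06.1",
        template="Flag only readability issues that affect maintainability. Diff:\n{diff}",
    ),
}

def render_prompt(name: str, diff: str) -> tuple[str, str]:
    """Return the rendered prompt plus its version for audit logging."""
    prompt = REVIEW_PROMPTS[name]
    return prompt.template.format(diff=diff), prompt.version
```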
The trend also impacted hiring and org structures. Some companies created roles like AI Development Advocate or Copilot Lead — engineers responsible for training, evaluating, and configuring AI development tools across departments. Training programmes emerged to upskill engineers in prompt engineering and AI review tuning.
Enterprises began asking hard questions: How do we audit AI code reviews? Can AI-generated feedback be considered part of compliance documentation? Should pull request logs contain prompt histories? These considerations led to new integrations with logging platforms and version control metadata layers.
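A minimal sketch of what such an audit trail might look like, assuming a team records one structured entry per AI suggestion; every field name here is illustrative rather than a description of any vendor's API. Hashing the prompt instead of storing it verbatim is one way to keep diff content out of the log while still linking feedback to a versioned template.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative audit record for a single piece of AI review feedback. The field
# names, and the choice to store a prompt digest rather than the prompt itself,
# are assumptions about how a team might do this, not any vendor's format.
def build_audit_record(pr_number: int, prompt_version: str, prompt_text: str,
                       model_name: str, suggestion: str) -> str:
    record = {
        "pull_request": pr_number,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "prompt_version": prompt_version,
        # A digest lets auditors match feedback to its versioned template
        # without copying potentially sensitive diff content into the log.
        "prompt_sha256": hashlib.sha256(prompt_text.encode()).hexdigest(),
        "suggestion": suggestion,
    }
    return json.dumps(record)

if __name__ == "__main__":
    print(build_audit_record(42, "2024.06.1", "example prompt", "example-model",
                             "Consider a parameterised query here."))
```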
In time, June 2024 will be remembered as the moment AI went from being a developer's assistant to a peer reviewer. This shift changed the speed, scale, and quality of software development, not by replacing humans but by augmenting them in a process that had long needed to evolve.
The year ahead will bring further innovation, particularly as LLMs become more capable of reasoning across large diffs, understanding cross-file context, and interfacing with ticketing systems for complete feature implementation reviews. But it all began with the realisation that code review is a team sport — and AI is now on the team.