AI doesn’t mean your developers are obsolete — if anything you’re probably going to need bigger teams
Software development will never be fully automated, so you’re going to need humans in the loop – and probably more of them
Software developers may be forgiven for worrying about their jobs in 2025, but GitLab Field CTO Marco Caronna believes the end result of AI adoption will be larger teams, not an onslaught of job cuts.
New research from GitLab shows a steep increase in the use of the technology across the profession, with 99% currently using – or planning to use – AI across the software development lifecycle.
Similarly, more than half (57%) of devs now use more than five tools for software development. This influx of AI tools is transforming the profession, with 75% saying the technology will “significantly change” their roles within the next five years.
However, as the study noted, this is creating a “paradox” for teams. Concerns over security, compliance, and skills are rising, prompting a rethink of traditional operating frameworks.
More than two-thirds (67%) of respondents said AI is making compliance management “more challenging” for their organization, for example.
Speaking to ITPro in the wake of the report’s publication, Caronna said the current pace of change in software development means many companies are “facing issues in keeping up”.
That’s not because AI is difficult to use, either. Indeed, it’s “quite the opposite”, he told ITPro.
“It's way too easy to use, which basically means that most enterprises who are conscious about compliance, security, costs, they find themselves in a situation where the adoption of these new tools becomes extremely complex,” he explained.
“They start having issues from a compliance standpoint and from a security standpoint, because the data path of information becomes unclear, and also from a cost perspective, it becomes a little bit more challenging.”
The rise of ‘AI platform engineering’
The solution here, Caronna suggested, will be an evolution of platform engineering practices that take into account AI, fusing the technology across various teams to fine-tune the development lifecycle.
Platform engineering has long been employed as a means to embed security and compliance considerations within the broader development lifecycle, creating closer synergy between developer, security, and operations teams.
But while AI may speed up development practices, it creates new risk surfaces, for example, through flaws in AI-generated code. More than three-quarters (78%) of respondents specifically highlighted this issue, noting they’d experienced problems with code created using “vibe coding practices”.
With this in mind, Caronna suggested augmenting the practice to compensate for these potential issues could be the key to safer AI-infused development processes.
“I do expect that we will end up with an ‘AI platform engineering practice’, or it’s probably going to have a better name. I’m not good at marketing messages,” he said.
“It’s going to be a team of experts who understand how to correlate the different agents, how to make them work together, and then publish those capabilities to the consumers,” Caronna added.
“So just like platform engineering teams publish the capabilities that can be used in cloud providers, on-premises, and so on, there’s going to be a similar pathway for AI adoption.”
Shifting left is critical
A key component of this transition will be a concerted focus on “shifting left” to tackle lingering issues with AI.
The technology has already influenced this shift, research shows. A May 2025 study from AI security firm Pynt, for example, found enterprises are ramping up efforts to shift left to bolster software security and tackle AI-related risks.
Placing a stronger emphasis on security will help break down long-standing barriers and bottlenecks for developer teams, Caronna said. Shifting left essentially helps weed out issues earlier in the lifecycle, preventing headaches further down the line.
“If you don’t start implementing a practice where you move security towards the very beginning of the coding [process], at that point what happens is that your developers are going to be chasing security issues in production,” he told ITPro.
“The closer you get to production, the more expensive it becomes to fix security issues, but also performance issues or features and capabilities issues,” Caronna added.
“So the reality is that when we talk about the prerequisites [for AI adoption], this shift left, potentially even shift left testing, that’s a prerequisite that needs to happen. Once those prerequisites are in place, at that point we can really start talking about how to adopt AI so that we can write secure code by default.”
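As a rough illustration of what shifting security left can look like in practice, the sketch below shows a minimal GitLab CI pipeline in which static analysis and dependency scanning run on every commit, before anything reaches a deploy stage. The job and stage names here are illustrative assumptions, not drawn from the article; the `include:` templates are the GitLab-maintained security templates.

```yaml
# Minimal shift-left pipeline sketch (stage and job names are assumptions).
# Security scans run in the "test" stage on every commit, long before deploy.
stages:
  - build
  - test      # security scans run here, alongside any unit tests
  - deploy

include:
  # GitLab-maintained templates that add SAST and dependency-scanning
  # jobs to the pipeline; both attach to the "test" stage by default.
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml

build-job:
  stage: build
  script:
    - echo "Compile and package the application here"

deploy-job:
  stage: deploy
  script:
    - echo "Deploy only after the scans in the test stage have passed"
```

The point of the structure is the one Caronna makes: a flaw surfaced by a scan at commit time is far cheaper to fix than the same flaw chased down in production.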
Devs aren’t going anywhere
This transitional period will ultimately result in larger teams, according to Caronna and GitLab. Three-quarters (75%) of respondents told the firm that as AI becomes more deeply embedded within development practices, more engineers will be required.
The thinking here is that software development is never going to be fully automated, and enterprises will be keen to keep humans in the loop across the lifecycle.
Indeed, only one-third of respondents said they’d trust AI to handle daily tasks without human review. This isn’t limited to developer teams, either. With more AI tooling, enterprises face greater security and compliance risks, meaning more staff in these respective domains will likely be required.
The outlook here runs counter to the prevailing sentiment among many developers and industry stakeholders over the last year. Across 2025, concerns about the impact of AI on developers have reached boiling point, spurred on by alarmist comments from leading industry figures.
Meta CEO Mark Zuckerberg and Salesforce chief Marc Benioff have both suggested the technology could reduce the need for developers.
Caronna said businesses now face two paths looking ahead. They can lean into the benefits of AI, allowing it to fuel future growth and support the creation of better software, or fall into the trap of thinking it’s an excuse to cut staff.
A slew of businesses have opted for the latter approach, and ultimately, it could prove detrimental.
“AI is not fully replacing developers,” he said. “AI is going to give you a productivity increase, and at that point you can invest that productivity increase in cost cutting or improving revenues.”
“If, as a savvy company, what you’re looking for is to improve your top line instead of decreasing the bottom line, at that point you’re going to invest that in producing even more capabilities for your customers,” Caronna added.
“More capabilities, generally speaking, should mean more revenue and an increase in the top line. That productivity increase can really provide a huge competitive advantage to companies.”
Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.