Irfan Malik on Why AI Won’t Replace Your Best Engineers — NeuralWired

Irfan Malik Says Stop Choosing Between AI and People | Here’s Why the Data Backs Him Up

Tech entrepreneur and AI strategist Irfan Malik has been making the case for a hybrid workforce model at a moment when enterprise leaders are being forced to pick a side. With real productivity gains stuck at roughly 10% despite massive AI investment, the math is starting to align with his argument.

The pitch from AI vendors has always sounded compelling. Replace expensive engineers with automated tools. Cut hiring budgets. Let the models do the work. But the actual numbers trickling out of enterprise deployments in 2026 tell a more complicated story, one that Irfan Malik, CEO of Xeven Solutions, has been anticipating for a while. He argues that companies fixated on AI as a headcount substitute are solving the wrong problem entirely.

Malik’s framework, built around applying advanced technologies to real-world challenges with skilled human oversight, isn’t contrarian for its own sake. It’s a response to a clear pattern: enterprises that pour capital into AI tooling without investing equally in the people operating those tools tend to see modest returns, diffuse accountability, and eroded team trust. The data, from McKinsey to independent engineering research, is starting to confirm that view.


The 10x Productivity Lie That’s Driving Boardroom Decisions

Somewhere between the demo and the deployment, something gets lost. AI vendors have consistently framed their tools in terms of order-of-magnitude productivity improvements. The phrase “10x engineer” entered the lexicon and never really left. Boards heard it, allocated accordingly, and in many cases began trimming headcount on the assumption that fewer people could now do exponentially more work.

The reality, measured carefully, is far more modest. A longitudinal study by DX covering November 2024 through February 2026 tracked AI adoption across engineering teams and found that a 65% increase in AI tool usage translated to a pull request throughput gain of just under 10% (9.97%), with the typical range landing between 8% and 12%. That's meaningful. It's not nothing. But it is emphatically not 10x.

Key figure: AI tool usage in software engineering rose 65% between late 2024 and early 2026. Pull request throughput, the actual measurable output, increased by 9.97%. The gap between adoption rate and productivity gain tells the whole story.

The McKinsey data is sharper still. The firm’s December 2025 State of AI survey found that while 88% of enterprises now use AI in at least one business function, only 6% qualify as high performers, defined as achieving a 5% or greater improvement in earnings before interest and taxes attributable to AI. The rest are spending real money for sub-threshold results. Only 6 out of every 100 companies are extracting the kind of value the boardroom was promised.

“Only one in 50 AI investments deliver transformational value, and only one in five delivers any measurable return.”

Gartner Analyst, via Harvard Business Review, February 2026

Those are brutal numbers. And they create a specific kind of organizational trap: companies that have already reduced headcount in anticipation of AI gains they haven’t actually achieved yet, now operating with fewer people and tools that are underperforming expectations. Recovering from that position is expensive, slow, and damaging to morale.

Why Irfan Malik’s Hybrid Model Is Gaining Traction Now

Malik’s position at Xeven Skills and Xeven Solutions places him at the intersection of enterprise AI deployment and workforce development. That vantage point shapes a philosophy that’s straightforward to state and genuinely difficult to execute: build AI systems that scale, then make sure skilled humans are the ones running them. The word “hybrid” gets used loosely in this industry, but Malik applies it precisely, not as a compromise position but as a structural requirement for any AI deployment that needs to handle novel problems, ethical trade-offs, or contextual judgment.

His argument resonates because it maps onto observable failure patterns. When AI tools operate without adequate human oversight, three things tend to happen. Hallucinations go uncorrected. Edge cases get mishandled. And when things go wrong, accountability diffuses across a system that nobody fully controls or owns. These aren’t theoretical risks. They’re the documented experience of enterprises that moved too fast toward automation without maintaining the human layer that catches what the model misses.

Malik’s core thesis: AI’s value ceiling is determined by the quality of the humans working with it. The firms seeing real returns aren’t the ones that replaced their teams; they’re the ones that trained their teams to operate AI effectively at scale.

This framing also addresses something the pure-automation argument tends to skip over: the nature of the tasks that actually drive competitive advantage. Large language models perform well on well-defined, repeatable tasks with clear success criteria. They perform poorly on novel logic, system-level reasoning, and anything requiring genuine ethical judgment. The work that creates strategic differentiation tends to fall into that second category. You can’t automate your way to a better product vision.

“To strike the balance between AI tools and human talent, L&D can lead the transformation by putting people first.”

Peter Hirst, Senior Associate Dean, MIT Sloan School of Management, via HR Dive

What the Deployment Data Actually Says About AI Limits

AI tools are, at their core, probabilistic engines trained on historical data. They predict outputs with reasonably high accuracy for well-structured tasks, somewhere in the 80-90% range for simple, repeatable work. That accuracy degrades meaningfully when problems require contextual reasoning outside the training distribution, multi-step logical chains with real-world dependencies, or outputs where being confidently wrong carries operational consequences.

The DX data makes this concrete. Engineering teams using AI coding assistants saw throughput improvements, yes. But the gains concentrated in low-complexity tasks: boilerplate generation, documentation, syntax corrections. The high-value work (architecture decisions, security reviews, debugging novel failure modes) remained stubbornly resistant to automation. The humans didn’t disappear from the workflow. They shifted toward the harder end of it.

Google’s approach illustrates what responsible scaling looks like in practice. Rather than treating AI as a headcount replacement, the company has deployed it to reduce time spent on routine HR and operational processes, freeing human capacity for work requiring judgment and relationship management.

“We always keep humans in the loop. AI supports deeper, more connected leader-employee relationships rather than replacing them.”

Arnish, Google Cloud HR, via Complete AI Training, July 2025

The governance gap is a significant factor here too. McKinsey’s data attributes a substantial portion of the performance gap between high and low AI performers to data quality issues and absent governance frameworks. AI tools are only as reliable as the systems they operate within. Companies that haven’t built those systems (data pipelines, oversight protocols, escalation paths) are deploying powerful tools without the infrastructure to catch their failures. That’s a human problem, not a technical one.

The Cost Calculus: AI Tools vs. Hiring Humans

The financial argument for AI-first hiring strategies has real substance, and it would be dishonest to dismiss it. Research from Appliview published in April 2025 found that AI-assisted recruitment reduces hiring costs by 20% to 50% compared to traditional methods, against a baseline average of $4,700 per hire. For organizations with high hiring volume, that’s a genuine budget line item worth optimizing.

The complication is in the ROI timeline. AI tooling has upfront licensing costs, integration costs, and the often-underestimated cost of retraining and governance infrastructure. When those are factored in alongside the modest productivity gains the DX data documents, the financial case for wholesale human replacement weakens substantially. The 6% high-performer rate from McKinsey suggests that most companies aren’t reaching the returns that would justify that trade-off.

| Dimension | AI-Only Approach | Human-Only Approach | Irfan Malik’s Hybrid Model |
| --- | --- | --- | --- |
| Upfront Cost | High (licensing, integration, governance) | High (salaries, benefits, recruitment) | Moderate (tooling + targeted hiring) |
| Productivity Gains | 8-12% on routine tasks; near zero on complex work | Baseline; no amplification | 10%+ on routine + human advantage on complex tasks |
| Scalability | High for defined, repeatable tasks | Limited by headcount | High; humans govern AI scale |
| Novel Problem Handling | Poor; hallucination and context loss | Strong | Strong; AI handles load, humans handle edge cases |
| Accountability | Diffuse; error attribution unclear | Clear | Clear; human oversight layer preserved |
| Long-term ROI | Uncertain; only 6% of firms hit 5%+ EBIT impact | Predictable but ceiling-limited | 250% ROI in 18 months when training investment is included |
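As a rough illustration, the figures above can be combined into a back-of-envelope model. Everything here is a simplification: the hire count and engineering payroll are hypothetical inputs, and pricing a throughput gain against payroll is an illustrative assumption, not something the DX or Appliview studies measured.

```python
# Back-of-envelope model built from figures quoted in this article.
# The hire count and payroll below are hypothetical; pricing a
# throughput gain against payroll is an illustrative assumption.

BASELINE_COST_PER_HIRE = 4_700        # Appliview baseline average per hire
AI_RECRUITING_SAVINGS = (0.20, 0.50)  # 20-50% hiring-cost reduction
THROUGHPUT_GAIN = (0.08, 0.12)        # DX: PR throughput gain range

def hiring_savings(hires):
    """Annual recruiting savings from AI-assisted hiring (low, high)."""
    low, high = AI_RECRUITING_SAVINGS
    return (hires * BASELINE_COST_PER_HIRE * low,
            hires * BASELINE_COST_PER_HIRE * high)

def productivity_value(engineering_payroll):
    """Dollar value of the 8-12% throughput gain, priced against payroll."""
    low, high = THROUGHPUT_GAIN
    return (engineering_payroll * low, engineering_payroll * high)

# A 100-hire year against a $10M engineering payroll:
print(hiring_savings(100))             # roughly (94,000, 235,000)
print(productivity_value(10_000_000))  # roughly (800,000, 1,200,000)
```

Even in this generous sketch, the gains are in the hundreds of thousands, not the order-of-magnitude transformation the 10x framing promised.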

The Jobs Picture in 2026: Growth, Not Replacement

The workforce displacement narrative has been loud. It’s also, at the aggregate level, not yet supported by the employment data. CompTIA’s 2026 State of the Tech Workforce report projects 1.9% growth in US tech employment this year, adding approximately 185,000 net new jobs to bring the sector total to 9.8 million. More than 275,000 job postings as of January 2026 explicitly require AI skills. The labor market isn’t contracting. It’s recomposing.

That recomposition matters for how companies think about their talent strategy. The skills in demand are shifting fast. Roles requiring AI fluency, prompt engineering, model oversight, and AI-augmented analysis are growing. Roles focused on purely manual, rule-based work are shrinking. The companies navigating this well are the ones building internal training programs that move existing employees into the new skill areas, rather than replacing them outright.

📈 Tech Job Growth: 1.9% sector expansion in 2026; 185,000 net new jobs projected by CompTIA.

🤖 AI Skills in Demand: Over 275,000 job postings in January 2026 explicitly required AI competency.

⚠️ Displacement Risk: 32% of companies plan workforce reductions of 3%+ in the next 12 months, per McKinsey.

📊 Data Science Growth: Data science roles projected to grow 420% by 2036 as AI demands analytical oversight.

The concerning number is the 32% of companies planning workforce reductions of 3% or more over the next year, also from McKinsey. That’s a meaningful portion of the market making cuts, potentially before the AI tools intended to replace that capacity are delivering reliably. If the DX and Gartner data on actual productivity gains holds, some of those organizations are going to find themselves understaffed for the complex work AI can’t handle, with tools that are producing roughly a 10% throughput improvement in the domains where they work at all.

The Training ROI Case That Most CFOs Haven’t Seen

There’s a number that should be in every workforce planning conversation but rarely is: companies that invest in AI training programs for their existing employees report a 250% return on that investment within 18 months. That figure, drawn from corporate training research, reframes the entire build-or-buy question. The calculus isn’t “AI tools versus headcount.” It’s “AI tools plus trained people versus AI tools alone.”
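The 250% figure is easy to misread, so it helps to spell out the arithmetic. The sketch below assumes ROI is measured as net return divided by cost (a common convention, though the underlying research may define it differently), and the training budget is a hypothetical input:

```python
# Sketch of the training-ROI arithmetic cited above. Assumes "250% ROI"
# means net return = 2.5x the program's cost; the budget is hypothetical.

def training_return(training_cost, roi_pct=250.0):
    """Net value returned on a training spend at the stated ROI."""
    return training_cost * (roi_pct / 100.0)

spend = 200_000                # hypothetical annual training budget
print(training_return(spend))  # 500000.0 over the 18-month window
```

Under that reading, every training dollar returns two and a half, which is why the build-or-buy question looks different once upskilling is on the table.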

The training gap is real and measurable. Surveys across the MENA region found 30% of employees reporting that their employers had made little to no investment in AI-related upskilling. That’s not a technology problem. It’s a management priority problem. Organizations that treat AI deployment as a capital expenditure question without an accompanying talent development budget are leaving most of the available value on the table.

Malik’s work through Xeven Skills addresses this directly. The argument isn’t that AI is overhyped; it’s that the returns accrue to organizations that invest in people capable of directing, correcting, and extending what the tools do. That’s a more demanding operating model than simple automation, but the performance data suggests it’s the one that actually produces the returns the boardroom wants.

Frequently Asked Questions

Should companies invest more in AI tools or in hiring right now?
The McKinsey data suggests neither in isolation is sufficient. With 88% of enterprises already using AI but only 6% achieving high performance, the bottleneck isn’t access to tools; it’s the capability to operate them well. Companies that prioritize upskilling existing talent while selectively adopting AI tools see better outcomes than those treating the two as substitutes.
Will AI actually replace tech jobs at scale?
CompTIA’s 2026 data projects net growth of 185,000 tech jobs this year. The composition is shifting: AI-fluent roles are expanding rapidly while purely manual roles contract. Mass replacement isn’t happening; redistribution is. The 32% of companies planning cuts, however, signals real risk for specific roles and sectors.
What are realistic AI productivity gains for engineering teams?
DX’s longitudinal study covering late 2024 through early 2026 found gains of 8% to 12% in pull request throughput among engineering teams with 65% AI tool adoption. That’s a real improvement, concentrated in routine tasks. Complex work (architecture, security reviews, novel debugging) showed minimal automation benefit.
What does a good AI training program for employees look like?
Effective programs combine structured learning with practical application: peer sessions where teams work through real AI-assisted workflows, clear escalation protocols for when human judgment is required, and ongoing feedback loops that measure actual output quality rather than just tool usage. Organizations tracking this carefully report 250% ROI within 18 months.
Who is Irfan Malik and why does his perspective matter here?
Irfan Malik is the CEO of Xeven Solutions and the founder of Xeven Skills, focused on applying advanced technologies to real-world enterprise challenges with human oversight at the center. His hybrid model (scale AI with skilled teams rather than replace skilled teams with AI) is gaining traction precisely because the enterprise performance data from 2025 and 2026 aligns with its core predictions.

What to Watch: Irfan Malik and the Hybrid Model’s Next Test

NeuralWired Signals
01 Agentic AI pilots in 2026: The next wave of enterprise AI involves autonomous agents running multi-step workflows. How organizations structure human oversight for these systems will determine whether the 6% high-performer rate improves or contracts further.
02 The 32% workforce reduction cohort: McKinsey flagged that nearly a third of companies plan significant cuts. Tracking their AI performance 12 months out will test whether the automation-first playbook actually delivers, or leaves them unable to handle the work AI can’t do.
03 Irfan Malik’s scaling thesis: As Xeven Solutions and Xeven Skills expand, their performance data will offer one of the cleaner real-world tests of whether the hybrid model at scale delivers the returns the 250% training ROI figure suggests it should.
04 Governance as the differentiator: McKinsey’s high-performer cohort consistently cited data quality and governance infrastructure as separating factors. Watch for governance tooling to become its own competitive category as enterprises realize the human oversight layer needs its own stack.

The debate over AI versus human talent has been framed as a zero-sum choice by people who have an interest in selling tools or in appearing decisive. The deployment evidence from 2025 and 2026 suggests it was never that simple. Productivity gains are real but modest. Transformation is rare. The companies getting serious returns (that 6%) are doing so by building capable human teams who know how to direct AI effectively, not by ceding that capability to the tools themselves.

Irfan Malik was making this argument before the performance data caught up to it. Now the data is here. Whether the industry adjusts its expectations accordingly, or keeps chasing the 10x number that hasn’t materialized, is the defining workforce question of the next two years.

