“So white-collar work, where you’re sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person, most of those tasks will be fully automated by an AI within the next 12 to 18 months.”

That remark, made by Microsoft AI CEO Mustafa Suleyman in an interview with the Financial Times, sounds different in 2026 than it would have a few years earlier. Office work has already split into two layers: the visible surface of emails, decks, briefs, and policies, and the machinery behind it that actually produces them. The first layer is increasingly generated; the second is increasingly about checking, routing, and defending decisions made under machine assistance.
The destination Suleyman described, human-level performance on most or all work-related tasks, reads less like a one-time breakthrough than an operational process. In software engineering, he claimed, workers already produce the vast majority of their code with AI assistance, a markedly different relationship to the technology than existed even six months earlier. That specificity matters: it implies the change is not just about what models can do independently, but about how fast organizations can fold them into day-to-day practice.
The change cuts deepest in the roles that are easiest to decompose into inputs and outputs. Microsoft researchers have tried to measure that exposure by ranking occupations with a generative AI applicability score: translators, historians, and writers rank highest against current capabilities, alongside customer service and sales roles. The same work found that AI applicability was higher among occupations requiring a bachelor’s degree, underscoring how much of today’s credentialed work consists of writing, summarizing, explaining, and handling documents.
In practice, “automation” often arrives as job redesign rather than clean job replacement.
One such redesign is the editor-as-safety-layer role that has spread through content and communications departments. In a Guardian profile of AI-driven career pivots, freelance writer Jacqueline Bowman described being hired to polish AI-generated articles at roughly half her previous rate, only to find the work took longer than writing from scratch because she regularly had to rewrite them wholesale. Every piece had to be fact-checked in its entirety, she said, and at least half of the content would be made up entirely. The work ceased to be writing; verification and accountability stayed human even as production moved to the machine.
The other variant is the rise of narrowly focused “agents” that no longer resemble chatbots. A due-diligence triage agent can read confidential information memorandums, extract revenue and EBITDA figures, and grade deals against a set of criteria, handling 50 documents at a time and flagging inconsistencies above a set threshold. A contract agent can review an NDA within minutes, detect non-standard terms, and suggest redlines. A 2025 overview from V7 Labs describes these systems as working well when they keep a tight scope, integrate with existing tools, and include a human review loop, an architecture that turns white-collar work into a series of machine-executed operations with approval gates in between, rather than human authorship.
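The triage pattern described above can be sketched in a few lines. This is a hypothetical illustration, not V7 Labs’ or any vendor’s actual implementation: the `Deal` fields, the inconsistency threshold, and the scoring rule are all invented for the example. The point it shows is the architecture, a machine-executed extraction-and-grading step whose flagged outputs stop at a human approval gate.

```python
from dataclasses import dataclass, field

# Hypothetical extracted record; a real agent would parse these figures
# out of a confidential information memorandum.
@dataclass
class Deal:
    name: str
    reported_revenue: float   # figure quoted in the executive summary
    schedule_revenue: float   # figure recomputed from the financial schedules
    ebitda: float

@dataclass
class Review:
    deal: Deal
    score: float
    flags: list = field(default_factory=list)
    needs_human: bool = False  # the approval gate

def triage(deals, inconsistency_threshold=0.05, min_ebitda=1.0):
    """Grade each deal; route any flagged deal to human review."""
    reviews = []
    for d in deals:
        # Illustrative grading rule: EBITDA margin on reported revenue.
        r = Review(deal=d, score=d.ebitda / max(d.reported_revenue, 1e-9))
        # Flag when the two revenue figures disagree beyond the threshold.
        gap = abs(d.reported_revenue - d.schedule_revenue) / max(d.reported_revenue, 1e-9)
        if gap > inconsistency_threshold:
            r.flags.append(f"revenue mismatch: {gap:.1%}")
        if d.ebitda < min_ebitda:
            r.flags.append("EBITDA below screen")
        r.needs_human = bool(r.flags)  # flagged deals stop here
        reviews.append(r)
    return reviews
```

A clean deal passes straight through (`needs_human=False`); one whose summary and schedule figures disagree is held for a person to certify, which is exactly the division of labor the article describes.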
The human consequences are already reshaping career logic. The Guardian has also reported on workers leaving writing, editing, and office-support roles for trades, caregiving, and other manual work seen as harder to automate. At the same time, researchers cited in that coverage cautioned against treating current capabilities as destiny: adoption trends and labor markets remain uneven even as the technology improves.
Suleyman’s uncertainty is compressed into that 12-to-18-month window. Whether the destination is fully or only partly reached by then, the engineering reality is plain: once a task is a structured input-output pipeline, documents in, decisions out, agents can be slotted in, managed, and scaled. The question left for many offices will not be whether AI can produce the work, but who has to certify it, and how that certifying is valued.

