Palantir’s CTO warns AI job panic hides the real shift in human work

More than 23 million U.S. jobs already have at least half of their tasks automated. On its face, that figure reads like a demolition crew on the way. The more useful reading is less obvious: work is once again being sliced into what software can reliably do and what still requires human judgment.


Palantir CTO Shyam Sankar has framed the public debate as a gap between what people are told about AI and what actually happens in operational environments. In his telling, AI is not a replacement for workers but a lever: systems that amplify the performance of professionals who retain decision-making authority and bear the consequences.

The workforce data supports the idea that "automation" is not a single switch. A large survey of U.S. employees found that 15.1% of jobs have at least half of their tasks automated, and 7.8% have at least half performed by generative AI. Quieter but more decisive, the same study found nontechnical barriers limiting automation in 63.3% of jobs: requirements for human interaction, regulatory constraints, and practical economics that keep automation from rolling out at scale. That does not mean organizations avoid AI; it means they integrate it into workflows where humans remain present and accountable. The short version of the shift is that job titles stay the same while the task mix changes.

One reason AI talk spirals into replacement narratives is that most measurements focus on what a model can accomplish alone. Newer research emphasizes complementarity: machines excel at pattern matching and recall, while humans excel at context, values, and social coordination. The MIT Sloan study "The EPOCH of AI: Human-Machine Complementarities at Work" systematizes this with an index of human-leaning capabilities (Empathy, Presence, Opinion/Judgment/Ethics, Creativity, and Hope/Vision/Leadership) and traces their growing share of real occupations. It also assigns a risk-of-substitution score and a potential-for-augmentation score, a framing that treats AI as a factor of productivity rather than a factor of employment. The authors found that newer work in the O*NET database tends to carry higher EPOCH scores than older work, suggesting the job market has been gradually shifting toward tasks that are harder to formalize and automate.

That repositioning has engineering consequences inside companies. When AI is cast as "the worker" and organizations optimize for throughput and cost, the familiar failure modes follow: biased data, brittle edge cases, and opaque outputs that are hard to challenge. When it is cast as "the instrument," organizations must design for oversight, with interfaces, audit trails, and escalation paths, so that humans stay in charge when the system is uncertain or the stakes are high.

Human-in-the-loop (HITL) design has become a workable template for that oversight. The approach places people at specific stages of the lifecycle, from data labeling through training and tuning to output validation and continuous monitoring, so that models improve and accountability stays legible. The goal is not to slow automation down but to ensure that automated steps can be questioned, corrected, or halted before mistakes become institutionalized.
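To make the pattern concrete, here is a minimal sketch of a human-in-the-loop gate in Python. It is illustrative only, not drawn from any Palantir system: the `HITLGate` class, its confidence threshold, and the `Decision` fields are all hypothetical. The idea it demonstrates is the one described above: auto-approve only when the model is confident and the stakes are low, escalate everything else to a human reviewer, and log every routing decision to an audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    input_id: str
    prediction: str
    confidence: float   # model's self-reported score, 0.0-1.0
    high_stakes: bool   # e.g. regulated or hard-to-reverse actions

@dataclass
class HITLGate:
    """Route model outputs: auto-approve only when confidence is high
    and stakes are low; otherwise escalate to a human reviewer."""
    confidence_threshold: float = 0.90  # hypothetical cutoff
    audit_trail: list = field(default_factory=list)

    def route(self, d: Decision) -> str:
        escalate = d.high_stakes or d.confidence < self.confidence_threshold
        outcome = "human_review" if escalate else "auto_approved"
        # Every decision is logged, so automated steps can be
        # questioned, corrected, or halted after the fact.
        self.audit_trail.append({
            "input_id": d.input_id,
            "prediction": d.prediction,
            "confidence": d.confidence,
            "outcome": outcome,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
        return outcome

gate = HITLGate()
print(gate.route(Decision("a1", "approve_claim", 0.97, high_stakes=False)))  # auto_approved
print(gate.route(Decision("a2", "deny_claim", 0.97, high_stakes=True)))      # human_review
print(gate.route(Decision("a3", "approve_claim", 0.55, high_stakes=False)))  # human_review
```

The design choice worth noting is that escalation is triggered by either condition, low confidence or high stakes, which mirrors the article's point: oversight is about who stays in charge when the system is uncertain or the consequences are serious.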

Governance frameworks try to institutionalize those safeguards. The longest-lived enterprise programs formalize accountability, transparency, fairness, privacy, and ongoing monitoring, drawing on the NIST AI Risk Management Framework, the risk tiers of the EU AI Act, and the ISO/IEC 42001 management-system approach. The more agentic AI systems become, taking initiative rather than merely reacting, the higher the demand for that operating discipline, because autonomy without logs and review turns errors into policy.

It is this gap that Sankar is provoking over: people are primed to watch for layoffs, but the more serious change is redesign. The workplaces benefiting most use AI to move routine reasoning into code, then invest in the human capabilities that remain, such as judgment under uncertainty, ethical reasoning, collaboration, and leadership, because those are the parts of the work that still cannot be reliably codified.
