Companies are increasingly finding ways to automate certain job functions with AI agents, but that doesn’t mean humans are going anywhere anytime soon.

Ali Ghodsi, cofounder and chief executive of Databricks, a data analytics and AI startup, told reporters on Wednesday at a conference in San Francisco that he believes “people underestimate how hard it is to completely automate a task.”

The CEO said humans will be in the loop for a long time since they add a level of supervision and accountability for the decisions made by artificial intelligence.

“I think in a few years, yes, we’ll have agents in many, many places, but there will be a human overseeing and approving every step, and you’re on the hook when you approve, when you click ‘Okay,’” he said. “We all become supervisors.”

AI agents are essentially digital assistants that companies have deployed to help with various job functions, from administrative tasks and customer service to coding and research.

Klarna said last year that its customer service AI agents could do the work of about 700 human employees. In June, Sam Altman, CEO of OpenAI, said that AI agents are starting to act like junior-level employees.

On Wednesday, Databricks, which is valued at $62 billion, unveiled a new platform for companies to create tailored AI agents without writing any code.

Ghodsi told reporters that Databricks’ customer base looks for agents that could help with HR onboarding or answer questions about company policies.

Ghodsi said that while the use of AI agents will proliferate in organizations, humans won’t be completely replaced, since agents still make mistakes.

Patronus AI, a startup that focuses on LLM evaluation and optimization, found that the more steps an AI agent takes, the higher its error rate.

Ghodsi pointed to the aviation industry as an example of a human remaining “in the loop” despite autopilot technology.

“Why do we still want two pilots in there? And why do we want the pilots to be trained — we don’t want them to be undertrained or asleep in the cockpit or drunk,” he said. “I think it’s just many, many orders of magnitude harder to get that last mile out. And given that the AIs occasionally get things wrong, in society, we want somebody to be responsible.”
