One AI company has little interest in being your digital best friend.

On an episode of the “20 VC” podcast published on Monday, Cohere cofounder Nick Frosst said that he’s not aiming to make the company’s large language model chatty and interesting.

“When we train our model, we’re not training it to be an amazing conversationalist with you,” Frosst said. “We’re not training it to keep you interested and keep you engaged and occupied. We don’t have like engagement metrics or things like that.”

The Canadian AI startup, founded in 2019, focuses on building for other businesses rather than consumers. It competes with foundation model providers such as OpenAI, Anthropic, and Mistral, and counts Dell, SAP, and Salesforce among its customers.

“One of the reasons why we’re focused on the enterprise is because that’s really where I think large language models are useful,” the cofounder said. “If I look at my personal life, there’s not a ton that I want to automate. I actually don’t want to respond to text messages from my mom faster. I want to do it more often, but I want to be writing those.”

Given that business focus, Frosst said Cohere trains its models on very different datasets than other model providers do.

“We generate a whole bunch of data to create like fake companies and fake emails between people at these fake companies and fake APIs within those fake companies,” he said, referring to synthetic training data.

The company was valued at $6.8 billion in a fundraise last month led by Radical Ventures and Inovia Capital. Cohere did not immediately respond to a request for comment from Business Insider.

Other LLM companies, such as Meta, Google, xAI, and OpenAI, have been pouring resources into making their models smarter, funnier, and more human-like as they race to monetize their chatbots.

In July, Business Insider reported that Meta is training chatbots, which can be built in its AI Studio, to be more proactive and message users unprompted to follow up on past conversations. The idea is to interact with users several times, store those conversations in memory, and then reach out with an engaging prompt to restart the chat.

AI companies are also avoiding making their bots sound arrogant, which could drive users to a competitor or raise questions about bias.

Google and Meta have a list of internal guidelines for training their chatbots to avoid sounding annoying or “preachy,” Business Insider reported in July. Freelancers for Alignerr and Scale AI’s Outlier have been instructed to spot and remove any hint of a lecturing or nudging tone from chatbot answers, including in conversations about sensitive or controversial topics.
