This as-told-to essay is based on a conversation with John Fitzpatrick, the chief technology officer at document management startup Nitro. It has been edited for length and clarity.

I have worked in AI for 15 years and am one of the original engineers behind Apple’s Siri. I’m currently the chief technology officer at Nitro — a software company that helps businesses manage and secure documents more efficiently.

Over the last year, I’ve seen a lot of AI washing, especially after ChatGPT took off.

AI washing is when companies exaggerate or misrepresent what their AI can actually do, just so they can say they’re using AI.

Suddenly, tons of apps that were just a new skin slapped on top of ChatGPT popped up. Businesses started rebranding their existing automation features as AI without making any real product enhancements.

I see this as similar to the “cloud” hype many years ago. Suddenly, every business became a cloud business. We’re seeing that with AI today. If you listen to earnings calls, every company’s talking about AI.

Recent AlphaSense data shows a 779% increase in mentions of terms like “agentic AI,” “AI workforce,” “digital labor,” and “AI agents” during earnings calls in the past year.

Almost every single startup now has to have an AI angle to secure funding.

Telltale signs of AI washing

There are a few different examples of AI washing.

One example is thin user interface layers on top of ChatGPT and maybe a small amount of prompt engineering. In some cases, that can be really valuable, but in many cases, it doesn’t add any particular value.

Another challenge with AI washing is companies rushing AI features to market through these simple integrations without considering customer privacy or security.

In the worst cases, major players launch assistant features and update their terms and conditions to allow them to use customer data for training.

Then there’s the problem of relying on third-party public APIs and services that vendors don’t control. That means sensitive documents get sent to third parties, which is a major security risk.

In regulated industries, where companies often handle extremely important documents, you want to be very careful about hallucination and ensure you’re getting things like confidence scores from the models.

Many of our customers have invoices and financial data in PDF documents. It’s really important that the extracted data is accurate, and that it’s very obvious when the model has low confidence.

Companies are also trying to do full automation flows without having that human check, and that’s where mistakes can happen.
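The pattern Fitzpatrick describes — confidence scores plus a human check before anything is committed — can be sketched in a few lines. This is a minimal illustration, not any real vendor's API: the extractor output format and the 0.9 threshold are assumptions made up for the example.

```python
# Minimal sketch of confidence-gated extraction with a human-review fallback.
# The per-field (value, confidence) format and the threshold are hypothetical,
# purely for illustration; real document-AI services expose confidence differently.

CONFIDENCE_THRESHOLD = 0.9  # fields below this are flagged for human review

def triage_extraction(fields):
    """Split extracted fields into auto-accepted vs. needs-human-review."""
    accepted, needs_review = {}, {}
    for name, (value, confidence) in fields.items():
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted[name] = value
        else:
            needs_review[name] = value  # route to a person; don't auto-commit
    return accepted, needs_review

# Example: invoice fields paired with the model's confidence in each
extracted = {
    "invoice_number": ("INV-1042", 0.98),
    "total_amount": ("1,240.00", 0.62),  # low confidence: must be checked
}
accepted, needs_review = triage_extraction(extracted)
```

The point of the split is that a low-confidence total never flows straight into an automated payment run; it stops at a person first.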

In these types of industries, those mistakes could be really costly.

We’re moving beyond the hype and into the adoption phase — where AI becomes an implementation detail of building really powerful product features.

Companies are learning what AI can and cannot do and are building genuinely good features with it.

Because of that, investors and the market are starting to distinguish superficial AI from products that add real utility.
