
This as-told-to essay is based on a conversation with Natalie Gilbert, a 30-year-old data scientist at AT&T whose father, Mazin Gilbert, was a researcher at the company’s Bell Labs division. The interview has been edited for length and clarity.

Growing up, I was super naive about what AT&T was.

What I knew about the company came through the lens of my dad, who was working on speech recognition. He worked with people like Yann LeCun, who was developing the capability to detect handwriting and convert it to text, and Dennis Ritchie, who created the C programming language.

My dad’s work with speech recognition and synthesis was the foundation for what I do today with generative AI. Everything I’ve built here has the same foundation he was working on: convolutional neural networks, which enable computers to process inputs like images and sound. It’s really cool to see how that foundation has evolved.

Those researchers' early discoveries have enabled us to work with AI agents and make them more autonomous.

As a child, I was pretty much in my dad’s office almost every day after school, and I remember watching him and his colleagues have heated discussions and draw crazy diagrams on the whiteboard.

That inspired me to start drawing my own decision trees and whatnot that were super nonsensical, but the experience taught me how to be creative and analytical.

One side project my dad and I worked on together was called Dr Bot, which was an early iteration of a large language model that could assess your symptoms and tell you where to seek care.

From whiteboarding to coding and back

What I do with AI agents really boils down to a bunch of decision trees that reason through how to get from point A to point B. It was something that I learned very early on with my dad.
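The idea of an agent reasoning through a decision tree can be sketched in a few lines. This is a toy illustration only, assuming a hypothetical request-routing task; the function name, topics, and rules are invented for the example and are not AT&T's system:

```python
# Minimal sketch of agent "reasoning" as a decision tree:
# each internal node asks a question about the input, and each
# leaf is an action. All names and rules here are hypothetical.

def route_request(request: dict) -> str:
    """Walk a hand-built decision tree from point A (a request)
    to point B (an action)."""
    if request["topic"] == "payroll":
        if request.get("urgent"):
            return "escalate_to_hr_specialist"
        return "link_payroll_policy"
    if request["topic"] == "benefits":
        return "link_benefits_portal"
    return "ask_clarifying_question"

print(route_request({"topic": "payroll", "urgent": True}))
# escalate_to_hr_specialist
```

Real agents learn or generate these branches rather than hard-coding them, but the underlying shape of the reasoning is the same.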

Human interaction is increasingly important in the building of AI technologies.

In AT&T’s Chief Data Office, we’re working on a project that’s transforming how people think about using HR technology within the company. We’re basically eliminating the question of where to go to solve an HR problem by having an AI agent identify the relevant policy or procedure for a person’s situation. That’s no small matter in an organization as large and complicated as AT&T.

In my own work, I do use a coding copilot, or digital assistant, that helps me work a lot faster, but people who are developing AI tools still need to understand the technologies that underlie LLMs and machine learning models.

New AI tools are amazingly powerful, but they can’t do everything

As these copilots get more popular, people can run into trouble if they don’t understand how those technologies fundamentally work.

For example, if you can't tell how the generated code is actually handling an edge-case scenario, you can't trust what your AI tools give you.

At the same time, it feels like people need to learn something new every two months.

What I see changing with large language models is that they are much more natural-language-focused rather than coded. That means I actually spend most of my time doing prompt engineering, which isn’t coding at all; it’s using natural language to get machines to understand us.
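In practice, prompt engineering often amounts to maintaining instruction templates rather than code paths. A minimal sketch, assuming a hypothetical HR-assistant template (the wording and function names are invented for illustration):

```python
# Toy illustration of prompt engineering: behavior is steered by
# editing the natural-language instructions below, not by changing
# program logic. The template is a hypothetical example.

POLICY_PROMPT = """You are an HR assistant.
Given an employee question, name the single most relevant policy
and answer in two sentences or fewer.

Question: {question}
"""

def build_prompt(question: str) -> str:
    # Tuning the model's behavior means rewording POLICY_PROMPT,
    # then re-testing; the code itself rarely changes.
    return POLICY_PROMPT.format(question=question)

print(build_prompt("How do I roll over unused vacation days?"))
```

Changing the instructions ("two sentences or fewer", the assistant's role, the output format) changes the model's behavior, which is why the work feels closer to writing than to coding.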

It’s sort of ironic, because this is another form of what my dad did 30 years ago.

AI has changed so drastically in my lifetime, and now I feel like I’m representing him and representing his legacy. Continuing the work that he did feels surreal.
