
Artificial intelligence and the evolution of trust

Two AI experts weigh in on the promises—and the perils—of this fast-evolving technology

Major advances in artificial intelligence (AI) have leaders in virtually every industry thinking deeply about what opportunities and challenges the technology might hold for their organizations. Trust in AI technology—in its promise and the ability to harness its power—is essential for businesses hoping to thrive amid future disruptions.

Vijay Karunamurthy, field CTO for Scale AI, which trains AI models, and Chau Dang, product manager for NVIDIA Morpheus, a cybersecurity AI framework for developers, discussed the future of AI in a recent conversation hosted by Fast Company and Deloitte & Touche LLP. Here are four takeaways from the event.

1. Artificial intelligence will transform knowledge work . . .
AI holds great potential for improving productivity, allowing a three-hour job to be knocked out in an hour with AI assistance. It’s like giving workers a copilot or an assistant that helps them reach a new level of output.

The impact could be substantial across industries. For instance, when an insurance claim comes in for vehicle damage, AI could gather and synthesize information from photos of the damage, background on flooding in the region where the claim originated, and other data to produce an initial recommendation on whether to approve the claim and, if so, for how much. AI could also flag areas needing review by a human insurance adjuster. “These models can attend to a large volume of data much faster than any human being would be able to review them,” Karunamurthy says.

2. . . . but AI still needs a human assist.
Large language models are typically trained on text from across the internet. But not all texts are created equal: these models might learn just as much from Aristotle as they do from online conspiracy theories. False or biased material can send them off track, and they can struggle with subtlety. Math problems, for instance, are easy for AI to handle. But qualitative judgments are more challenging. “Thousands of people have asked a model to generate a poem for them,” Karunamurthy says. “But there are nuances about what makes a good poem, what defines style and aesthetics.”

To improve, AI models need training from humans with real-world expertise. From a talent perspective, the people likely to be most successful are those who combine humanities or legal backgrounds with computer science expertise. They can understand why a model gave a certain response, then train it to produce better, more useful output.

For instance, prompt engineers specialize in helping align AI behavior with human intent. “The people that do it really well understand the nuances of how these models think and use techniques—like chain of thought—to help the model walk through a possible answer in a structured way,” Karunamurthy says.
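To make the “chain of thought” idea concrete, here is a minimal sketch of how such a prompt might be structured. The model name, the use of the OpenAI Python SDK, and the example question are illustrative assumptions, not details from the conversation.

    # A minimal sketch of a chain-of-thought prompt. Assumes the OpenAI Python SDK;
    # any chat-style LLM API would work the same way.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "A jacket costs $120 after a 25% discount. What was the original price?"

    # Instead of asking for the answer directly, ask the model to walk through
    # its reasoning in a structured way before committing to a final answer.
    prompt = (
        "Think through this problem step by step, showing each intermediate "
        "calculation, and only then state the final answer.\n\n"
        f"Question: {question}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

The structured, step-by-step instruction is what nudges the model to show its reasoning rather than jumping straight to an answer.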

3. Personalization produces better interactions.
Just as Siri can recognize an individual voice, AI can be taught to recognize a person’s behavioral data and patterns. This can lead to richer, more appropriate interactions. For example, an AI assistant might know from your inbox that you’re planning a trip to Vancouver this summer and that you’re shopping for hiking boots. It could then suggest pants and a jacket that are appropriate for the climate and your activities—a step well beyond most e-commerce platforms’ current recommendation engines. “A lot of times, our context is actually more valuable than our past data in terms of determining what we want right now,” Dang says.

Of course, this level of personalization will require attention, and potential regulation, related to privacy and security. As AI advances, developers must be transparent and engage with stakeholders early on about how personal data is used. “It’s going to be incredibly important to let us all have control over how these models think about people because, by default, they’ve all made assumptions about us based upon what we’ve shared on the internet,” Karunamurthy adds. “We should be in control of that decision-making.”

4. AI can be employed to navigate massive challenges.
AI has potentially transformative uses for society as a whole. “We often think about the power of AI for chatbots and things like that,” Dang says. “But there are a lot of real human problems that we can be solving for.”

For example, AI simulations can run multiple scenarios around complex topics, such as global warming or genomics. And self-driving vehicles powered by AI can take humans out of the driver’s seat to move material around dangerous areas such as construction sites and mining operations. “Wherever human beings are threatened by really tricky environments or fast-changing conditions, you’ll see adaptation to self-driving techniques that make robots and other vehicles safer,” Karunamurthy says.

That helping hand goes beyond AI piloting autonomous vehicles. For example, Karunamurthy cites the successful deployment of Scale AI’s Automated Damage Identification Service to support Ukraine in its war with Russia. “In Mariupol, we’re able to apply the power of large language models, plus object detection models, to look at overnight rooftop damage so repair crews and rescue workers can navigate on the ground in real time,” he says. “That’s the sort of work that takes the copilot model and applies it to mission-critical [areas].”

About the author

FastCo Works is Fast Company's branded content studio. Advertisers commission us to consult on projects, as well as to create content and video on their behalf.
