
Grounding LLMs in reality: How one company achieved 70% productivity boost with gen AI

Drip Capital: Revolutionizing Trade Finance with AI

Drip Capital, a Silicon Valley-based fintech startup, is leveraging generative AI to achieve a remarkable 70% productivity boost in cross-border trade finance operations. The company, which has raised more than $500 million in debt and equity funding, is employing large language models (LLMs) to automate document processing, enhance risk assessment and dramatically increase operational efficiency. This AI-driven approach has enabled Drip Capital to process thousands of complex trade documents daily, significantly outpacing traditional manual methods.

Founded in 2016, Drip Capital has quickly emerged as a significant player in the trade finance sector, with operations spanning the U.S., India and Mexico. The company’s innovative use of AI combines sophisticated prompt engineering with strategic human oversight to overcome common challenges such as hallucinations. This hybrid system is reshaping trade finance operations in the digital age, setting new benchmarks for efficiency in a traditionally paper-heavy industry.

Karl Boog, the company’s Chief Business Officer, emphasizes the scale of its efficiency gains: “We’ve been able to 30X our capacity with what we’ve done so far.” This dramatic improvement demonstrates the transformative potential of generative AI in fintech, offering a compelling case study of how startups can use AI and LLMs to gain a competitive edge in the multi-trillion dollar global trade finance market.

At the heart of Drip Capital’s AI strategy is the use of advanced document processing techniques. Tej Mulgaonkar, who heads product development at the company, explains their approach: “We process about a couple of thousand documents every day. We’ve struggled with this for a while, obviously right in the beginning we set up manual operations.”

Getting the most from today’s LLMs

The company’s journey with AI began with experiments combining optical character recognition (OCR) and LLMs to digitize and interpret information from various trade documents. “We started experimenting with a combination of OCR and LLMs working together to digitize and then make sense of information,” Mulgaonkar said.
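
In practice, a pipeline of this kind can be as simple as feeding OCR output into a structured extraction prompt. The sketch below illustrates the general pattern, assuming pytesseract for OCR and OpenAI’s chat API as the model backend; these are illustrative choices, since the article does not specify which OCR engine or model vendor Drip Capital actually uses.

```python
# Minimal sketch of an OCR + LLM document pipeline.
# Assumptions: pytesseract for OCR, OpenAI Chat Completions as the LLM;
# the field names and model name are placeholders, not Drip Capital's stack.
import json

import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = (
    "You are extracting fields from a scanned trade finance document. "
    "Return a JSON object with keys: invoice_number, buyer, seller, "
    "amount, currency, due_date. Use null for any field you cannot find. "
    "Do not guess values that are not present in the text."
)

def digitize(image_path: str) -> str:
    """Run OCR on a scanned document and return the raw text."""
    return pytesseract.image_to_string(Image.open(image_path))

def extract_fields(raw_text: str) -> dict:
    """Ask the LLM to turn OCR text into structured fields."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,         # deterministic output reduces drift
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": raw_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(extract_fields(digitize("sample_invoice.png")))
```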

However, the path to successful AI integration wasn’t without challenges. Like many companies grappling with generative AI, Drip Capital initially faced issues with hallucinations – instances where the AI would generate plausible but incorrect information. Mulgaonkar acknowledges these early hurdles: “We struggled a bit for a while, actually. There was a lot of hallucination, a lot of unreliable outputs.”

To overcome these challenges, Drip Capital adopted a systematic approach to prompt engineering. The company leveraged its extensive database of processed documents to refine and optimize the prompts used to instruct the AI. “We had hundreds of thousands of documents that we have processed over seven years of operations for which we had basically the accurate output data available in our database,” Mulgaonkar explains. “We built a very simple script that allowed us to pick out samples of input data, pass through the prompts that we were writing, get some outputs from a set of agents and then compare those outputs to what we have in the database as the accurate source of truth.”
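
A minimal version of the evaluation loop Mulgaonkar describes might look like the following: sample historical documents, run the candidate prompt, and score the output against the verified values already stored in the database. The table and column names here are hypothetical, and `run_prompt` stands in for any extraction function such as the sketch above.

```python
# Sketch of a prompt-evaluation script over historical ground truth.
# Assumptions: documents and their verified fields live in a SQLite table
# named "documents" with columns ocr_text and verified_fields (JSON).
import json
import random
import sqlite3

def load_samples(db_path: str, n: int = 200) -> list[tuple[str, dict]]:
    """Pull a random sample of (ocr_text, ground_truth) pairs from history."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT ocr_text, verified_fields FROM documents").fetchall()
    conn.close()
    sample = random.sample(rows, min(n, len(rows)))
    return [(text, json.loads(fields)) for text, fields in sample]

def field_accuracy(predicted: dict, truth: dict) -> float:
    """Fraction of ground-truth fields the prompt extracted correctly."""
    if not truth:
        return 0.0
    hits = sum(1 for key, value in truth.items() if predicted.get(key) == value)
    return hits / len(truth)

def evaluate_prompt(run_prompt, db_path: str) -> float:
    """Average accuracy of run_prompt(ocr_text) -> dict over sampled documents."""
    scores = [
        field_accuracy(run_prompt(text), truth)
        for text, truth in load_samples(db_path)
    ]
    return sum(scores) / len(scores)
```

Each candidate prompt gets a single accuracy number, so variants can be compared and refined iteratively against the same verified history.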

This iterative process of prompt refinement has significantly improved the accuracy of their AI system. Mulgaonkar notes, “Engineering prompts actually really helped us get a lot more accuracy from the LLMs.”

Drip Capital’s approach to AI implementation is notable for its pragmatism. Rather than building its own LLMs, deploying sophisticated retrieval-augmented generation (RAG) or engaging in complex fine-tuning, the company has focused on optimizing its use of existing models through careful prompt engineering.

Prompt engineering’s triumphant return

In early 2023, The Washington Post declared prompt engineering “tech’s hottest new job,” highlighting how companies were scrambling to hire specialists who could coax optimal results from AI systems through carefully crafted text prompts. The article painted a picture of prompt engineers as modern-day wizards, capable of unlocking hidden capabilities in LLMs through their mastery of “prose programming.”

This enthusiasm was echoed by other major publications and organizations. The World Economic Forum, for instance, listed prompt engineering among the emerging AI jobs in their Jobs of Tomorrow report. The sudden surge of interest led to a flurry of online courses, certifications and job postings specifically tailored for prompt engineering roles.

However, the hype was quickly met with skepticism. Critics argued that prompt engineering was a passing fad, destined to become obsolete as AI models improved and became more intuitive to use. A March 2024 article in IEEE Spectrum boldly proclaimed “AI Prompt Engineering is Dead,” suggesting that automated prompt optimization would soon render human prompt engineers unnecessary. The article cited research showing that AI-generated prompts often outperformed those crafted by human experts, leading some to question the long-term viability of the field.

Despite these criticisms, recent developments suggest that prompt engineering is far from dead – it’s evolving and becoming more sophisticated. Drip Capital provides a compelling case study of how prompt engineering continues to play a crucial role in leveraging AI for business operations.

Drip Capital created a sophisticated process that combines technical expertise with domain knowledge. The company’s success demonstrates that effective prompt engineering goes beyond simply crafting the perfect string of words. It involves:

  1. Understanding the specific business context and requirements
  2. Developing strategies to maintain AI system accuracy and reliability
  3. Creating complex multi-step prompting strategies for advanced tasks like document processing (sketched in the example after this list)
  4. Collaborating with domain experts in finance and risk assessment to incorporate specialized knowledge into AI interactions
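
To illustrate the third point, the sketch below breaks document processing into a classify, extract and cross-check chain. The decomposition and the prompts are illustrative assumptions rather than Drip Capital’s actual prompting strategy, and `call_llm(prompt, text)` stands in for whatever chat API is in use.

```python
# Sketch of a multi-step prompting chain for trade document processing.
# Assumptions: document types, field names and prompts are placeholders;
# call_llm(prompt, text) wraps the chat API and returns the model's text.
import json

CLASSIFY = (
    "Classify this trade document as one of: invoice, bill_of_lading, "
    "packing_list. Reply with the label only."
)
EXTRACT = {
    "invoice": "Extract invoice_number, amount, currency, due_date as JSON.",
    "bill_of_lading": "Extract bl_number, shipper, consignee, port_of_loading as JSON.",
    "packing_list": "Extract invoice_number, total_packages, gross_weight as JSON.",
}
CHECK = (
    "Given these extracted fields and the original text, list any field whose "
    "value is not literally supported by the text. Reply with a JSON list."
)

def process_document(ocr_text: str, call_llm) -> dict:
    # Step 1: decide which extraction prompt applies.
    doc_type = call_llm(CLASSIFY, ocr_text).strip().lower()
    if doc_type not in EXTRACT:
        return {"type": doc_type, "fields": {}, "needs_review": ["unrecognized document type"]}
    # Step 2: extract the fields relevant to that document type.
    fields = json.loads(call_llm(EXTRACT[doc_type], ocr_text))
    # Step 3: ask the model to flag values it cannot ground in the source text.
    flagged = json.loads(call_llm(CHECK, json.dumps(fields) + "\n---\n" + ocr_text))
    return {"type": doc_type, "fields": fields, "needs_review": flagged}
```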

The company’s AI system doesn’t operate in isolation. Recognizing the critical nature of its financial operations, Drip Capital has implemented a hybrid approach that combines AI processing with human oversight. “We have kept a very nominal manual layer that works asynchronously,” Mulgaonkar explains. “The documents will be digitized by the LLMs, and the module will provisionally approve a transaction. And then, in parallel, we have agents look at the three most critical parts of the documents.”
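
A simplified version of that asynchronous review layer might be structured as follows. The article does not say which three fields count as most critical, so the ones below are placeholders, and the workflow itself is a sketch rather than a description of Drip Capital’s system.

```python
# Sketch of an asynchronous human-in-the-loop layer: transactions are
# provisionally approved on LLM output while critical fields are queued
# for human verification in parallel. Field names are placeholders.
from dataclasses import dataclass, field
from queue import Queue

CRITICAL_FIELDS = ["invoice_number", "amount", "due_date"]  # placeholder choice

@dataclass
class Transaction:
    doc_id: str
    extracted: dict
    status: str = "provisionally_approved"
    corrections: dict = field(default_factory=dict)

review_queue: Queue = Queue()

def provisionally_approve(doc_id: str, extracted: dict) -> Transaction:
    """Approve on LLM output and enqueue critical fields for human review."""
    txn = Transaction(doc_id=doc_id, extracted=extracted)
    review_queue.put(txn)  # reviewed asynchronously; processing is not blocked
    return txn

def human_review(txn: Transaction, verified: dict) -> None:
    """Apply a reviewer's verdict on the critical fields only."""
    for key in CRITICAL_FIELDS:
        if key in verified and verified[key] != txn.extracted.get(key):
            txn.corrections[key] = verified[key]
    txn.status = "flagged" if txn.corrections else "confirmed"
```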

This human-in-the-loop system provides an additional layer of verification, ensuring the accuracy of key data points while still allowing for significant efficiency gains. As confidence in the AI system grows, Drip Capital aims to gradually reduce human involvement. “The idea is that we slowly phase this out as well,” Mulgaonkar states. “As we continue to gather data on accuracy, the hope is that we get enough comfort and confidence that we’d be able to do away with it altogether.”

Extending AI to risk assessment

Beyond document processing, Drip Capital is also exploring the use of AI in risk assessment. The company is experimenting with AI models that can predict liquidity projections and credit behavior based on their extensive historical performance data. However, they’re proceeding cautiously in this area, mindful of compliance requirements in the financial sector.
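
As a rough illustration of the idea, a baseline credit-risk model trained on historical transaction outcomes might look like the sketch below, here using scikit-learn. The dataset and feature names are hypothetical; the article does not describe the company’s actual modeling approach.

```python
# Sketch of a baseline risk model over historical transaction data.
# Assumptions: a CSV of past transactions with a known defaulted label;
# the file name and feature columns are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("historical_transactions.csv")  # placeholder dataset
features = ["invoice_amount", "days_past_due_avg", "buyer_tenure_months"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["defaulted"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Default probabilities could feed a decision engine, with humans still
# reviewing anomalies and larger exposures.
risk_scores = model.predict_proba(X_test)[:, 1]
```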

Boog explains their approach to risk assessment: “The ideal thing is to really get to a comprehensive risk assessment… To have a decision engine that gives you a higher probability of figuring out if this account is riskier or not and then what the exposures are.”

However, both Boog and Mulgaonkar stress that human judgment remains essential in their risk assessment process, especially for anomalies or larger exposures. “Tech definitely helps, but you still need a human element to oversee it, especially for risk,” Boog notes.

Drip Capital’s success with AI implementation is partly attributed to its data advantage. As an established player in the trade finance space, they have accumulated a wealth of historical data that serves as a robust foundation for their AI models. Boog highlights this advantage: “Because we’ve done hundreds of thousands of transactions prior to AI, there’s so much learning in that process. And then using that data we already have to keep making things more optimized is definitely helping us.”

Looking ahead, Drip Capital is cautiously optimistic about further AI integration. They’re exploring possibilities in conversational AI for customer communication, though Mulgaonkar notes that current technologies still fall short of their requirements: “I don’t think you can have a conversation with AI yet. It has reached the extent of being a very smart IVR, but it’s not really something that can be completely handled off.”

Drip Capital’s journey with AI offers valuable insights for other companies in the financial sector and beyond. Their success demonstrates the potential of generative AI to transform operations when implemented thoughtfully, with a focus on practical applications and a commitment to maintaining high standards of accuracy and compliance.

As AI continues to evolve, Drip Capital’s experience suggests that companies don’t need to build complex AI systems from scratch to reap significant benefits. Instead, a pragmatic approach that leverages existing models, focuses on prompt engineering and maintains human oversight can still yield substantial improvements in efficiency and productivity.


Author: James Thomason
Source: Venturebeat
Reviewed By: Editorial Team
