As we build out the Inscribe product, we often find ourselves solving a series of smaller tasks. These tasks might be something like "Is this document a bank statement?" or "Is this address real?" or "Does this person work at this company?"
Each decision about which technology to use—whether a Large Language Model (LLM), an API, a machine learning (ML) model, or simple hardcoded logic—can impact the efficiency, cost, and long-term maintainability of our product.
The question then becomes: How do we choose the right tool for each task?
For many tasks, especially those that are well-defined and straightforward, starting with a "low tech" hardcoded solution often makes the most sense. These approaches—such as using regular expressions (regex), switch statements, or simple rule-based algorithms—are the quickest and most resource-efficient way to get a solution up and running.
Benefits of hardcoded logic:
- Fast to build and deploy, with no training data or infrastructure required
- Deterministic and easy to debug: every outcome can be traced to a specific rule
- No external dependencies, per-request costs, or added latency
However, these advantages come with significant trade-offs:
Drawbacks of hardcoded logic:
- Brittle: rules only handle the cases their authors anticipated
- Increasingly hard to maintain as edge cases accumulate and rules begin to interact
- A poor fit for fuzzy or variable inputs, where no finite set of rules captures the variation
Hardcoded logic is an excellent starting point for simple tasks, but it quickly reaches its limits as complexity increases.
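As a sketch of this approach, a rule-based check for "Is this document a bank statement?" can be a handful of regular expressions. The keywords and match threshold below are illustrative assumptions, not production rules.

```python
import re

# Hypothetical rule-based check: does extracted document text look like a
# bank statement? The patterns and threshold are illustrative assumptions.
BANK_STATEMENT_PATTERNS = [
    re.compile(r"\baccount\s+number\b", re.IGNORECASE),
    re.compile(r"\b(opening|closing)\s+balance\b", re.IGNORECASE),
    re.compile(r"\bstatement\s+period\b", re.IGNORECASE),
]

def looks_like_bank_statement(text: str, min_matches: int = 2) -> bool:
    """Return True when at least `min_matches` patterns appear in the text."""
    hits = sum(1 for pattern in BANK_STATEMENT_PATTERNS if pattern.search(text))
    return hits >= min_matches
```

The appeal and the limits are both visible at once: the rule is transparent and costs nothing to run, but any statement phrased differently from these patterns slips straight through.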
When a task demands more flexibility or functionality than hardcoded logic can provide, and when a third-party solution already exists, using an API can be a highly effective next step. APIs are particularly useful for tasks that involve data retrieval, verification, or integration with outside services.
However, relying on APIs also introduces several potential risks and challenges:
- Dependency on a third party for uptime, latency, and pricing
- Rate limits and per-request costs that grow with volume
- Sending data outside your own systems, which raises security and compliance questions
If an API exists that solves the exact problem we're tackling, and it's secure, cheap, and fast, using it can be the right call.
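Because of those risks, the integration code matters as much as the choice itself. Here is a sketch of defensive API usage for an address check, with the HTTP call abstracted behind a hypothetical `call_api` callable so the retry-and-fallback logic is the focus; the response field and retry parameters are assumptions.

```python
import time
from typing import Callable, Optional

def verify_address(call_api: Callable[[str], dict],
                   address: str,
                   retries: int = 3,
                   backoff_seconds: float = 0.5) -> Optional[bool]:
    """Return the provider's verdict, or None if the API stays unavailable."""
    for attempt in range(retries):
        try:
            response = call_api(address)          # e.g. an HTTPS POST in production
            return bool(response["deliverable"])  # assumed response field
        except (ConnectionError, TimeoutError, KeyError):
            time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff
    return None  # caller decides the fallback (e.g. queue for manual review)
```

Returning `None` rather than raising keeps the "provider is down" case explicit, so the calling code has to decide what happens when the third party can't answer.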
As tasks become more complex and require a higher degree of precision, machine learning models offer a robust solution. ML models excel in scenarios where hardcoded logic and APIs fall short, particularly in cases that involve pattern recognition, prediction, or decision-making based on large datasets.
Strengths of ML models:
- High precision on pattern-recognition and prediction tasks, learned directly from data
- Ability to handle fuzzy, noisy inputs that defeat rule-based logic
- Performance that improves as more labeled data becomes available
However, the power of ML models comes with development and operational overhead:
- Collecting and labeling training data is slow and expensive
- Training, serving, and monitoring require dedicated infrastructure
- Models drift as real-world data changes, so they need periodic retraining
ML models are best reserved for tasks where their high precision provides a clear advantage over other approaches, and where the high development and maintenance costs can be justified.
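To make the contrast with hardcoded rules concrete, here is a minimal naive Bayes text classifier in pure Python: it learns word statistics from a handful of invented labeled examples rather than relying on hand-written patterns. A production system would use a proper ML library and far more data; this only sketches the shape of the approach.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Tiny naive Bayes text classifier with add-one smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()
        self.vocab = set()

    def train(self, examples):
        """examples: iterable of (text, label) pairs."""
        for text, label in examples:
            self.label_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Unlike the regex version, nothing here encodes what a bank statement looks like; the model infers that from the labeled examples, which is exactly why the labeling and retraining overhead exists.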
Large Language Models (LLMs) represent a significant advancement in AI, offering generative capabilities that extend beyond the scope of traditional ML models.
The latest frontier LLMs have been trained on so much data that, depending on the use case, they can often get you 80% of the way (or beyond) to the performance of a specialized ML model. The newest models, such as Claude and GPT-4o, are also multimodal, so they work with images as well as text.
Experimenting with an LLM approach requires no training data and no infrastructure, so it's a great place to start. You can even adapt the model's behavior at runtime, with no retraining, by including in-context learning examples in the prompt.
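In practice, in-context learning can be as simple as embedding a few labeled examples in the prompt. The task, examples, and wording below are illustrative; the resulting string would be sent to whichever LLM chat API you use.

```python
# Invented few-shot examples for an illustrative document-classification task.
FEW_SHOT_EXAMPLES = [
    ("Opening balance: $4,210.55 ... Closing balance: $3,980.00", "bank_statement"),
    ("Invoice #2041. Total due: $1,500 by March 1", "invoice"),
]

def build_classification_prompt(document_text: str) -> str:
    """Assemble a few-shot prompt asking the model to label a document."""
    lines = ["Classify each document as bank_statement or invoice.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Document: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Document: {document_text}")
    lines.append("Label:")
    return "\n".join(lines)
```

Changing the task is a matter of editing the examples and rerunning, which is where the iteration-speed advantage over a trained ML model comes from.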
LLMs can be slow and expensive per request, and providers often impose rate limits. But the iteration speed they enable when building new features often makes that trade-off worthwhile.
LLMs can also do things no other approach on this list can, like reasoning over arbitrary inputs. This is the big capability enabling Inscribe's AI analysts: it lets us provide an equivalent of human-like reasoning that can analyze complex and subtle scenarios. And it's getting better fast.
Selecting the right tool for a given task is not just about solving the problem at hand—it's about doing so in a way that aligns with your broader goals of efficiency, scalability, and maintainability.
At Inscribe, we've found that a balanced, strategic approach works best:
- Start with hardcoded logic for simple, well-defined tasks
- Reach for an API when a secure, affordable third-party solution already exists
- Invest in an ML model when precision matters enough to justify the development and maintenance cost
- Use LLMs for flexible reasoning, multimodal inputs, and fast iteration on new features
We continually explore and refine our approach to technology selection so that we keep delivering the best possible solutions for our customers.
We're committed to helping you navigate these decisions and implement the best solutions for your specific needs. Our team has deep expertise in deploying the right mix of technologies to solve even the most complex problems.
Ready to see how Inscribe can help you? Request a Demo today to speak with one of our experts. We'll walk you through our approach, discuss your unique challenges, and show you how our AI-driven solutions can elevate your business. Whether you're looking to streamline operations, enhance accuracy, or innovate with cutting-edge AI, we're here to partner with you every step of the way.
Dan Gurney is the Tech Lead for AI Agents at Inscribe AI, where he guides the development of LLM-powered agents that automate tasks and enhance fraud detection for major fintech companies. Previously, Dan was the Engineering Team Lead at PredictionHealth, where he played a critical role in launching an AI-driven product that significantly improved medical care efficiency. His career also includes key roles at UBiqube and CarTrawler, where he contributed to building next-gen interfaces and large-scale web applications. Dan's technical expertise spans a wide array of technologies, including React, Go, Node, Python, and Kubernetes, and he is known for delivering scalable, high-quality solutions while mentoring other engineers.
Start your free trial to catch more fraud, faster.