Matt Motsick, CEO of Rippey AI
Matt Motsick is the CEO of Rippey AI, which applies machine learning, natural language processing, and generative AI to emails, documents, and chat conversations in the logistics industry to automate manual tasks such as responding to quotes and creating shipments.
Matt was previously the founder/CEO of Catapult International, the industry-leading ocean shipping rate management system. Within eight years, he grew Catapult into a market leader with over 150 employees and clients including FedEx, Expeditors, DSV, Carnival Cruise Lines, and XPO Logistics. Catapult was acquired in 2015 by Warburg Pincus.
It seems that there is an AI startup born every week. With new entrants coming into the LogTech market at a fast pace, the question many incumbents are asking themselves is: “How can we maintain our competitive position and build a ‘moat’ in our space?”
There are several factors in building a competitive moat:
Developing a Deep Niche: By focusing on a particular problem in the market, AI companies can develop specialized products that fit a specific need. If a company can specialize in its core competencies with AI tools, it creates a high barrier for competitors to overcome. This approach helps maintain a focus on what they do best — innovating and leading in their specific domain.
Brand and Reputation: Companies that are early adopters of effective AI technologies can establish themselves as leaders in their field, enhancing their brand’s credibility and reputation. As a first mover, a company can set the standard and be perceived as the technology leader in its particular space, which leads to the next point.
Stickiness: Once a company’s AI solutions are deeply integrated into a customer’s operations, the switching costs become significant. This lock-in effect can be due to the customization of the AI to specific tasks, the integration with existing systems, or the reliance on continuous updates and improvements.
The most important factor in building a moat in AI, however, is the choice between building (creating an internal, in-house model) and buying (paying for an external solution).
The Case for Buy
Applied AI companies can quickly build competitive moats by leveraging third-party model platforms, often in the form of Large Language Models (LLMs), instead of developing in-house solutions. Below are some advantages of using third-party LLMs:
Cost: Initially, as a startup, costs are an important factor in getting off the ground. Developing AI capabilities in-house requires significant investment in R&D, talent acquisition, and infrastructure. Third-party platforms reduce these upfront costs by providing access to state-of-the-art AI tools and technologies on a “pay per drink” subscription basis. For a young startup, it’s a small price to pay for out-of-the-box functionality that has taken nearly 10 years to develop so far.
Rapid Deployment and Scalability: Third-party AI platforms allow companies to quickly deploy AI solutions and scale up as needed without the delays associated with developing their own AI infrastructure. Many complexities, such as retraining and deployment lifecycles, are abstracted away.
Access to Expertise: Third-party providers often have teams of experts who are at the cutting edge of AI research and application. By leveraging these platforms, companies can benefit from the latest advancements without having to hire and retain a large team of AI specialists. Due to the flexibility of current LLMs, their outputs can be guided using “prompts” rather than through more traditional training methods, which are complex and require specialized knowledge.
Focus on Innovation: By outsourcing the underlying AI technology, companies can focus more on innovation and applying AI in unique ways within their business models. This can lead to better customization and more effective use of AI tailored to specific business needs. More importantly, being first to market creates a strong competitive advantage (moat).
Lack of Proprietary Data: Machine learning models require vast amounts of data to train and to effectively tackle the problem at hand. Data is often the biggest determining factor in a custom model’s effectiveness, and without it, building models within an organization can be extremely cumbersome. Due to the emergent and zero-shot learning properties of LLMs, it is extremely fast to implement an MVP using external LLM providers.
The Case for Build
Cost: As a startup matures, it often processes more transactions. As volume grows, the overall price third parties charge for inference on that data increases dramatically. By contrast, once internal models are built, organizations have much more control over inference, often reducing costs by an order of magnitude or more.
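The economics above come down to a simple break-even calculation: pay-per-use pricing scales linearly with volume, while an in-house model trades a fixed infrastructure cost for a much smaller marginal cost. A rough sketch, using purely hypothetical prices (not vendor quotes):

```python
# Back-of-the-envelope break-even sketch: third-party inference vs. an
# amortized in-house model. All dollar figures are hypothetical.

def third_party_cost(docs_per_month: int, price_per_doc: float = 0.02) -> float:
    """Pay-per-use: cost scales linearly with volume."""
    return docs_per_month * price_per_doc

def in_house_cost(docs_per_month: int, monthly_infra: float = 4000.0,
                  price_per_doc: float = 0.001) -> float:
    """Fixed infrastructure plus a much smaller marginal cost per document."""
    return monthly_infra + docs_per_month * price_per_doc

def break_even_volume(price_ext: float = 0.02, monthly_infra: float = 4000.0,
                      price_int: float = 0.001) -> float:
    """Monthly volume at which the two cost curves cross."""
    return monthly_infra / (price_ext - price_int)

print(f"Break-even at about {break_even_volume():,.0f} documents per month")
```

Below the break-even volume the third-party route is cheaper; above it, the fixed cost of the in-house model amortizes away and the internal option wins, which is why the answer tends to flip as a startup matures.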
Data: When an AI company uses a third party, depending on the vendor, the data is often fed back into the vendor’s models for training. Congratulations, you are helping another company improve their model with your data. By contrast, with a team of data scientists improving an internal model, that data can be transformed into insights and trends, an added benefit that can provide the end-customer with additional reporting.
Security: Many corporations are becoming more concerned with what happens to their data. Essentially, when an AI company utilizes a third-party LLM, it is handing the corporation’s data to an outside party. With an internal model, the data is stored within the firewalls of the AI company and nowhere else.
Model Applicability: Often, tasks don’t need the full computational power of LLMs. One example is sentiment analysis: detecting the opinion expressed in a piece of text (good, neutral, bad), such as classifying Amazon reviews. Sentiment analysis techniques have existed for over 20 years and have been optimized and refined to be blazingly fast and cheap. While it would be easy to implement something like this with a prompt, doing so would be a rather inefficient use of LLMs.
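To make the point concrete, here is a minimal sketch of a classic lexicon-based sentiment classifier, the kind of pre-LLM technique the paragraph refers to: no model inference, no per-call fees, just word lookups. The word lists are illustrative stand-ins, not a production lexicon.

```python
# Minimal lexicon-based sentiment classifier (illustrative word lists only).
# Classic techniques like this run in microseconds with no inference cost.

POSITIVE = {"great", "excellent", "fast", "reliable", "love", "good"}
NEGATIVE = {"bad", "slow", "broken", "terrible", "late", "hate"}

def sentiment(text: str) -> str:
    """Score text by counting positive vs. negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "good"
    if score < 0:
        return "bad"
    return "neutral"

print(sentiment("Great product, fast shipping"))  # good
print(sentiment("Terrible and late delivery"))    # bad
```

Production systems use far richer lexicons and handle negation, but even this toy version shows why routing a three-class classification through an LLM prompt is overkill for the task.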
Less Replaceable: As a company begins to scale, competition usually surges in its domain. If your core product revolves around a prompt (a block of text describing your goal to an LLM), it is simply a matter of time before someone else writes something similar. A clear way to truly solidify your position in the market is to develop either an amazing product that only lightly leverages these platforms, or to build models yourself.
In summary, the answer to Build (internal model) vs. Buy (LLMs) depends on your company’s stage and how much data the AI product will process.
For instance, if you are a startup less than two years old with a limited budget, the Buy route is the best option. Incorporating third-party AI platforms into a moat-building strategy allows AI-centric companies to enhance their core offerings while staying agile and cost-effective, leveraging external expertise for the underlying technology so they can keep innovating in their specific domain.
However, if your AI company is more established and can afford a team of R&D engineers, AI developers, and data scientists, each making over $100,000 USD per year, the investment will pay off over time as billions of data points flow into your database. The more data, the better the AI. Soon, your internal model creates its own competitive moat, because it’s your model and no one else (hypothetically) has access to how your company processes the data.