The Generative Leap: Where LLMs are Outpacing the Competition by 10x

Startups need to be 10x better than the competition to disrupt a market. With generative AI being a 10x technology in many ways, let’s explore how AI-native startups are using it to create outsized value.

A well-known theory about startups is that they need to be 10x better than the competition to disrupt a market. Anything less, and it’s not worth it for users to switch from incumbents. One way for products to make a quantum leap is by piggybacking on technological improvements that are themselves 10x upgrades—the Internet, smartphones, and of late, generative AI.

According to a recent A16Z analysis, 80% of the top-50 AI companies didn’t exist a year ago. Let’s take a look at how some of these newcomers are challenging the old guard with LLM-powered solutions. We’ll survey five key use cases where generative AI is already delivering or poised to create 10x more value than non-AI predecessors.

1. Faster and more intuitive information retrieval

ChatGPT, likely the biggest household name in AI, triggered a “code red” at Google just weeks after its launch. As the Internet soon discovered, the LLM-powered assistant is capable of everything from writing poetry to refactoring code. More simply still, ask ChatGPT a question and it replies in natural language with a (usually, but not always) accurate answer.

How does this stack up against legacy search engines? ChatGPT isn’t categorically better than Google Search—hallucinations are one major hurdle it has yet to overcome. Still, the AI chatbot highlights some key frictions we’ve been living with in how we access information:

  • Search queries are unnatural and low fidelity
  • Users have to comb through a list of 10+ links before finding the information they want
  • Follow-up questions involve querying again from scratch

For a simple task like checking the score of a basketball game, these pain points might matter less. But for something more involved, like planning a 3-day travel itinerary, ChatGPT transforms what could otherwise be an hour of perusing SEO-friendly travel blogs into a minutes-long conversation. Unlike conventional search engines, which only return content that already exists online, LLMs excel at synthesizing disparate sources, so they can answer questions that you may be the first person to ever ask.

ChatGPT can compose straightforward travel itineraries in under a minute.

Google isn’t being complacent. It’s invested heavily in Bard and other parts of its generative AI ecosystem. If anything, that signals that LLMs are the future of information retrieval, whether that future is led by Google or something new like You.com.

But before LLMs displace general-purpose search engines, we may see them 10x their competition in specific niches. ChatPDF, ranked 28th in popularity on A16Z’s list of AI products, focuses on retrieving information from user-uploaded PDFs. Other startups like Juicebox and Robin AI are applying AI to simplify people search and legal-contract search respectively.

2. Customer support at lower cost without sacrificing quality

It’s increasingly hard to find a phone number to get customer support, with businesses funneling customers to live chat agents instead. The trend makes sense—it’s far cheaper to serve a high volume of customers over chat than over the phone.

Then came automated, “if-this-then-that” chatbots, which once again cut costs by an order of magnitude, but also deeply frustrated consumers. These bots follow a rigid decision tree that leaves little room for nuance. In practice, this first generation of automated chatbots acted merely as gatekeepers to human agents who would often have to intervene.
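To make that rigidity concrete, here is a minimal Python sketch (with entirely hypothetical keywords and canned replies, not any vendor’s actual implementation) of how this first generation of bots worked: exact keyword routing, with anything off-script escalating straight to a human.

```python
# A toy "if-this-then-that" support bot: rigid keyword routing, with
# everything unrecognized escalating to a human agent.
# Keywords and replies are hypothetical, purely for illustration.

DECISION_TREE = {
    "billing": "Please visit our billing portal to update your payment method.",
    "password": "Use the 'Forgot password' link on the login page.",
    "shipping": "Orders ship within 3-5 business days.",
}

def rigid_bot(message: str) -> str:
    for keyword, canned_reply in DECISION_TREE.items():
        if keyword in message.lower():
            return canned_reply
    # No branch matched: the bot is just a gatekeeper to a human.
    return "Transferring you to a live agent..."

print(rigid_bot("I forgot my password"))                      # matches a branch
print(rigid_bot("I was double charged and the refund bounced"))  # escalates
```

Anything phrased outside the tree’s vocabulary, however simple, falls through to a person, which is exactly the gatekeeper behavior described above.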

It’s a matter of when, not if this conversation will need a live representative.

With their ability to make sense of natural language, LLMs largely solve this quality problem. LLM-powered chatbots are far better at understanding user inputs and can produce far more sophisticated responses than their rules-based predecessors. Granted, LLMs don’t have the same emotional range as a human, so it would be a stretch to eliminate human support agents entirely.

Still, compared to live support, conversational AI isn’t just cheaper; it can be implemented in a fraction of the time, without the need to train a fleet of agents. And because LLMs can process larger amounts of information than a human ever could, they enable more personalized conversations tailored exactly to a customer’s needs. As a customer, you also don’t need to worry about re-explaining your situation to each subsequent manager in an escalation.

Conversational AI’s applications span a wide variety of use cases beyond just customer support. For instance, Drift builds products for conversational commerce, including sales and marketing. HelpHub by CommandBar helps customers access help center content through chat instead of having to read docs. Meanwhile, Rasa is a platform for building any type of conversational AI interface, whether for internal use cases like IT support or external ones like scheduling product demos.

3. Breaking down barriers to content creation

Creating content, whether that’s text, images, audio, or video, is time consuming to say the least. For example, every minute of finished video likely took between 30 minutes and an hour to edit (not to mention the time spent filming). And that’s assuming the editor is a trained professional. Building the expertise to create high-quality content takes years of practice and training in itself.

Some of the most popular consumer AI apps present a shortcut to at least approximating the work of a creative professional. Can Leonardo.ai outdo Leonardo da Vinci? No, but that also misses who these products are really competing with. AI content creation apps more than 10x the ability of non-creatives to produce creative assets.

An AI-generated interpretation of the Mona Lisa as a square image.

Rather than breaking the bank to hire a marketing agency, a small online retailer can use PhotoRoom to create product images of its entire catalog. Those images might form the basis of a product video if paired with a script written by Copy.ai. With Speechify, the business doesn’t even need specialized audio equipment to turn that script into a voiceover. If the business wanted to expand into a new market, translating everything into another language would be trivial.

Even if generative AI can’t currently match a professional’s quality, it can still add value to full-time creatives and amplify their work. LLMs can help with ideation, take a first pass on editing, and perform repetitive tasks. Humans, in turn, can worry less about the mechanics of making content and focus more on actually being creative.

4. Democratizing data analysis with natural language

To illustrate generative AI’s versatility, let’s jump from creative applications to something very quantitative—crunching data. There’s no shortage of legacy data tools we could examine, but if there’s one category that comes to mind, it has to be spreadsheet software.

Spreadsheets are incredibly powerful tools, but they’re also complex. Pivot tables are infamous among Excel users and are just one of many spreadsheet components that can be a pain to debug when they inevitably break. Much like content creation, people spend years mastering spreadsheets (among a host of other data tools), and the barrier to entry is high for the uninitiated.

Enter generative AI. DataSquirrel’s thesis is that anyone can analyze data without specialized knowledge. It uses AI to handle data cleaning, analysis, and visualization, each of which can take hours to do in a spreadsheet. ThoughtSpot, which we interviewed recently, emphasizes how data is a means to an end. In particular, ThoughtSpot Sage is an LLM-powered search interface that lets business decision makers spend their time asking good questions instead of wrangling data.

ThoughtSpot Sage uses LLMs to process natural language search queries and surface data insights.

LLM tools do more than just make data accessible to all. They also operate at a scale that a human can’t match, even with computer assistance. Say your business has dozens or more metrics it regularly checks. Rather than hard-coding alerts for all the possible ways those metrics could go wrong, you can ask an AI assistant to notify you about unusual patterns with minimal configuration. Check out Narrative BI for a generative analytics platform that extracts actionable insights from raw data.
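As a rough sketch of what “minimal configuration” could look like under the hood, the hypothetical code below applies one generic statistical rule (a z-score threshold) across every metric, then hands the flagged metrics to a summarizer. In a real product, that summary step would be an LLM call; it is stubbed out here.

```python
from statistics import mean, stdev

def find_anomalies(metrics: dict[str, list[float]],
                   z_threshold: float = 3.0) -> dict[str, float]:
    """Flag any metric whose latest value deviates sharply from its history.
    One generic rule covers every metric -- no per-metric alert wiring."""
    anomalies = {}
    for name, history in metrics.items():
        *past, latest = history
        if len(past) < 2 or stdev(past) == 0:
            continue
        z = (latest - mean(past)) / stdev(past)
        if abs(z) >= z_threshold:
            anomalies[name] = round(z, 1)
    return anomalies

def summarize(anomalies: dict[str, float]) -> str:
    # In a real product this would prompt an LLM; stubbed for illustration.
    if not anomalies:
        return "All metrics look normal."
    return "Unusual activity in: " + ", ".join(anomalies)

metrics = {
    "signups": [120, 118, 123, 119, 121, 122, 60],  # sudden drop
    "revenue": [50.0, 51.2, 49.8, 50.5, 50.1, 49.9, 50.3],
}
print(summarize(find_anomalies(metrics)))
```

The point is the shape of the system: one rule plus a language model scales to dozens of metrics, where hand-written alerts would need to be configured one by one.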

Similar to Google Search, the current market leaders in the data space aren’t sitting back either. Time will tell whether AI-native startups will win over the market or if the incumbents successfully adapt.

5. Smarter workflow automation with less effort

Sometimes, 10x improvements hide in plain sight, failing to gain significant traction. To continue with the earlier spreadsheets example, running an Excel macro certainly beats traversing each row of a table and repeatedly applying an operation. There’s easily a 10x time savings to be had from thoughtfully implementing macros. That said, what fraction of the general population actually knows how to record a macro?

Likewise, products like Zapier and IFTTT sprang up in the early 2010s to integrate and automate all sorts of web services. Yet these platforms, at least before they too started doubling down on AI, were largely the domain of power users. Until recently, no-code workflow automation still felt a lot like imperative programming, just with more dragging and dropping.

There’s a recurring theme of LLMs increasing usability by way of natural language interfaces, and workflow automation is no exception. For example, Bardeen lets you declaratively create workflows from natural language prompts, which the underlying AI converts into a series of steps. Similar to the other use cases we discussed, generative AI wildly expands the field of people who can perform a given task.
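The declarative pattern might be sketched like this: a natural-language prompt goes to a model, which returns an ordered list of structured steps. The LLM call below is stubbed with a canned JSON response, since the point is the shape of the interface rather than any vendor’s actual API, and the action names are hypothetical.

```python
import json

def llm_plan(prompt: str) -> str:
    # Stub standing in for a real LLM call; a production system would send
    # the prompt to a model and ask for structured JSON back.
    return json.dumps([
        {"action": "scrape", "target": "linkedin_profile"},
        {"action": "append_row", "target": "google_sheet"},
        {"action": "send_message", "target": "slack"},
    ])

def build_workflow(prompt: str) -> list[dict]:
    """Turn a natural-language request into an ordered list of steps."""
    return json.loads(llm_plan(prompt))

steps = build_workflow(
    "When I visit a LinkedIn profile, save it to my sheet and ping Slack"
)
for i, step in enumerate(steps, 1):
    print(f"{i}. {step['action']} -> {step['target']}")
```

The user states the outcome; the model supplies the imperative sequence that earlier no-code tools made users assemble by hand.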

Still, discoverability is an ongoing problem—recording an Excel macro isn’t that hard, but people still don’t know when to use it. Bardeen’s AI solution is to proactively suggest workflow automations to users. It runs in the background as a Chrome extension and detects repetitive actions, at which point it creates an automation that a user can just accept.

LLMs aren’t just making it easier to create workflows; they’re also making workflows more powerful. Now, it’s possible to add workflow actions that invoke ChatGPT in the same way that other actions might call on Slack or Salesforce. Argil is a good example of the flexibility this offers and claims to 10x your productivity.
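A toy workflow engine illustrates the idea: an LLM call registers as just another action type alongside, say, a Slack action. Both handlers below are stubs rather than real API calls, and the action names are invented for this sketch.

```python
from typing import Callable

# Action registry: each action is a function from input text to output text.
# Both handlers are stubs -- real ones would call the Slack and OpenAI APIs.
ACTIONS: dict[str, Callable[[str], str]] = {
    "slack_post": lambda text: f"[posted to #general] {text}",
    "llm_summarize": lambda text: f"Summary: {text.split('.')[0]}.",
}

def run_workflow(steps: list[str], payload: str) -> str:
    """Pipe a payload through a sequence of named actions."""
    for name in steps:
        payload = ACTIONS[name](payload)
    return payload

result = run_workflow(
    ["llm_summarize", "slack_post"],
    "Q3 churn rose 2%. Full details in the attached report.",
)
print(result)
```

Because the model sits behind the same interface as any other integration, adding intelligence to an existing workflow is just one more step in the list.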

How do you measure “10x improvements”?

From all the examples we’ve covered, it’s clear that generative AI is a huge step forward. It enables everyone from laypeople to experts to level up their skills, produce better results, and do so in less time. But how much better is generative AI than what came before?

It’s a hard question to answer, especially because LLMs come with tradeoffs. They carry a non-trivial financial cost, and concerns about accuracy are valid. Not all LLM-powered experiences are created equal, and bad ones can be incredibly frustrating for users—likely a net negative for any business.

At Context.ai, we’re building a platform so that you can have visibility into how users are receiving your LLM applications. Our suite of analytics tools and visualizations helps you track everything from user sentiment to trends about what your users are asking your models (and what your models are saying back). Context.ai gives you confidence that you’re measurably improving your products for a better user experience. To learn more, reach out for a demo today.
