Corporate Strategy

Building the Tercera 30: Where AI Helped and Hindered

At Tercera, we spend considerable time advising business leaders on how AI is impacting the way work is delivered. So we are always looking for ways to take our own medicine and responsibly weave AI into our own processes.

When it came time to develop this year’s annual Tercera 30 report, we knew this was the perfect opportunity to leverage AI in a purposeful way. Over the last four years, we’ve built up a significant amount of data on software vendor ecosystems. This, along with our deep connections with software channel partners, bankers, analysts, and service providers in these ecosystems, gives us a lot to work with. However, we are still a small team without a dedicated research arm. 

We started by looking at our existing processes and where we could leverage the foundational enterprise AI tools we already use (e.g., ChatGPT, Claude) and the AI capabilities embedded within our existing productivity tools (e.g., Google, Box, HubSpot, and Microsoft). These processes include: data collection, data validation, report development, and marketing and promotions.

From there, we looked at how we could use fit-for-purpose AI platforms and agents to do things that would’ve been difficult or impossible to do without giant budgets, or an army of analysts. This included interviewing or doing deep research on smaller, less established software vendors where we may not yet have direct relationships, or getting direct insights from senior enterprise IT leaders to understand how they’re buying, building or adopting AI platforms. 

The two tools we ended up leaning on the most were ChatGPT Deep Research and Bridgetown Research’s agentic research platform.

ChatGPT: The Swiss Army Knife for Research 

ChatGPT proved most helpful at filling in basic data in our repository of company information. In batches, we created tables of information such as website, company description, and public/private status, formatted so we could import them directly into our existing data set. Of course, there was more than one instance of hallucinated or incorrect information, so the output still required a fair amount of double-checking against data we had collected previously and other sources. Based on this year’s usage, that verification step should get lighter in future cycles. We also found consistent value in ChatGPT’s Deep Research function for developing a base-level understanding of corners of the market we were unsure about.
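As a concrete illustration of that double-checking step, a small script can compare AI-filled records against a trusted prior dataset and surface only the conflicts for human review. This is a minimal sketch; the field names ("website", "is_public") and sample companies are hypothetical, not our actual schema or data:

```python
# Sketch: cross-check AI-generated company metadata against previously
# collected, trusted data and flag disagreements for human review.
# Field names and records below are illustrative only.

def flag_conflicts(ai_rows, trusted_rows, key="company",
                   fields=("website", "is_public")):
    """Return (company, field, ai_value, trusted_value) for each mismatch."""
    trusted = {row[key]: row for row in trusted_rows}
    conflicts = []
    for row in ai_rows:
        known = trusted.get(row[key])
        if known is None:
            continue  # new company: nothing to compare against yet
        for field in fields:
            if row.get(field) != known.get(field):
                conflicts.append((row[key], field, row.get(field), known.get(field)))
    return conflicts

ai_rows = [
    {"company": "Acme", "website": "acme.com", "is_public": True},  # hallucinated status
    {"company": "Globex", "website": "globex.com", "is_public": False},
]
trusted_rows = [
    {"company": "Acme", "website": "acme.com", "is_public": False},
    {"company": "Globex", "website": "globex.com", "is_public": False},
]

print(flag_conflicts(ai_rows, trusted_rows))
# Only the Acme public/private mismatch is surfaced for review.
```

The point of a filter like this is that humans review the handful of disagreements rather than re-verifying every AI-generated field.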

Bridgetown Research: Agents to Go Wider, Deeper, Faster

We were first introduced to Bridgetown Research through one of our Advisors, and we were immediately impressed by its vision to transform how primary and secondary data are gathered and analyzed, achieving scale, speed, and cost efficiencies that were previously difficult or nearly impossible. At its core, the Bridgetown platform uses autonomous AI agents to conduct expert interviews, collect data, and run custom analyses. It’s still early days, and we only scratched the surface of what the platform can do, but we were impressed enough that we are considering incorporating it more deeply into our AI stack for value creation advisory work.

Lessons Learned for Using AI in Research

When it comes to incorporating AI deeper into our processes, we learned first-hand just how important it is to experiment. Some of the ways we leveraged AI in this process exceeded expectations; others were more trouble than they were worth.

It reinforced to us that AI can be extremely powerful as part of the research process. However, its impact is best felt when applied not generally, but purposefully. 

Here are 5 other lessons we learned along the way.

1. ChatGPT is a utility player, but not the best tool for everything

General-purpose GPTs are table stakes for the modern worker. Like Word or Excel, they are a base tool you should reach for by default to draft, summarize, run first-pass research, and structure messy information. In our process, ChatGPT sped up recurring tasks like populating company metadata into import-ready tables and spinning up quick briefs so we could engage experts faster. But just as a growing company eventually moves from tracking pipeline in a spreadsheet to implementing a CRM, you’ll eventually outgrow what a generalist GPT can do on its own. 

The real transformational change in our research process came from pairing a generalist GPT with a purpose-built tool.

Bridgetown Research’s platform expanded our reach by using agents to coordinate expert and buyer interviews and generating targeted insights we could not reliably get from a generalist model alone. Its “Bridget” assistant answered research questions more effectively because it was tuned to market-research workflows and grounded in the data we supplied. 

As scope and complexity increase, you need tools designed for the specific job. The model you choose also matters: different GPTs handle retrieval, structure, and citation differently, and task–tool fit drives quality.

2. There is still a role for traditional research tools

AI helped us move faster and cover more ground, but it did not replace the basics. We still leaned on traditional research and analysis tools. For example, we commissioned a third-party research firm to survey 321 current services leaders to gauge how teams were actually implementing AI and thinking about partnership strategies.

We also built a straightforward scoring model to evaluate ISVs on service potential and growth. The weightings and KPIs came from proven ways of consolidating and analyzing data, which kept the results anchored in reality rather than hidden in a black box. Then came the hands-on work that turns data into insight. We used Excel to clean, weight, and combine variables, and to turn the results into a clear 2×2 grid that made the tradeoffs easy to see.
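In spirit, the scoring and quadrant placement work like the sketch below. The KPI names, weights, and the 0.5 cutoff are illustrative assumptions, not our actual model:

```python
# Sketch of a weighted scoring model that places each ISV on a 2x2 grid.
# KPI names, weights, and the 0.5 quadrant cutoff are illustrative only.

SERVICE_WEIGHTS = {"partner_program_maturity": 0.6, "services_attach_rate": 0.4}
GROWTH_WEIGHTS = {"revenue_growth": 0.7, "ecosystem_momentum": 0.3}

def score(kpis, weights):
    # Weighted sum of KPI values already normalized to a 0-1 scale.
    return sum(kpis[name] * w for name, w in weights.items())

def quadrant(isv, cutoff=0.5):
    """Score one ISV on both axes and label its quadrant on the 2x2 grid."""
    s = score(isv, SERVICE_WEIGHTS)
    g = score(isv, GROWTH_WEIGHTS)
    label = (
        "leader" if s >= cutoff and g >= cutoff
        else "growth-first" if g >= cutoff
        else "service-first" if s >= cutoff
        else "emerging"
    )
    return s, g, label

isv = {
    "partner_program_maturity": 0.8,
    "services_attach_rate": 0.6,
    "revenue_growth": 0.4,
    "ecosystem_momentum": 0.5,
}
print(quadrant(isv))  # high service score, lower growth score
```

The value of keeping the model this transparent is exactly what the paragraph above describes: every placement on the grid can be traced back to explicit weights and inputs rather than an opaque score.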

Just as important, we talked to people. Interviews and live conversations with services leaders, ISV channel leaders, and analysts gave us context, helped validate surprising findings, and turned the numbers into practical guidance. AI gave us speed and reach. Traditional research tools and real conversations made the output trustworthy and useful. 

3. Challenge the output at each step 

Reviewing any AI output is essential, but it’s just as important to understand the inputs and sources the AI tools are pulling from. In more than a few instances, we found data was wrong because it had been pulled from an unreliable source. A prime example was a higher-than-expected rate of misidentifying whether companies were publicly traded. Double-checking outputs like this is a given for anyone using AI to produce deliverables. And while this may seem like a no-brainer, Deloitte’s recent blunder shows how easy it is to skip this step in the name of efficiency and speed.

Another problem we encountered was a feedback loop involving our own data. As we queried in more specific areas, multiple AI research platforms began feeding us back data from our past Tercera 30 reports. At first glance this felt incredibly self-validating, but the danger is exactly that: we risked blindly validating ourselves with stale data and commentary instead of challenging our narrative with other sources and data from the past year. Luckily, we identified the loop and used our previous thought leadership as contextual input while ensuring we incorporated third-party resources. GPTs and agents are designed to consolidate data that already exists. Even then, the outputs are not consistently reliable (although accuracy continues to improve).

4. Agents are powerful, but they can’t replicate a human experience

While agents can do many things, the power of a human relationship remains one thing that they can’t replace. We found great value in using the Bridgetown platform agents to conduct interviews with key market stakeholders, but we also found tremendous value in calling upon our professional relationships to validate (and debate) the selections we made for this list. 

Not only do we know where these individuals stand and what might be influencing their opinions, but through these conversations we garnered insights that likely would not be shared with an agent the interviewee has no personal relationship with.

Exposure to expert and on-the-ground opinions is not only crucial to our process but the secret sauce to the Tercera 30.

While anyone with enough time and resources can replicate the base process that we have built, it’s tough to replicate relationships and discernment.

In a similar vein, it is unrealistic to expect agents and GPTs to automatically reflect the decades of institutional knowledge and context we bring to analyzing and scrutinizing the data we gather for the Tercera 30. Unlike relationships, this knowledge can be trained into models over time, but doing so requires foresight and structure. Even then, the models will need to be updated to reflect the rapid changes and learnings we accumulate.

5. AI cannot run the whole project (yet)

AI gave us speed and reach, but it couldn’t (and we didn’t expect it to) fill all of the gaps across an end-to-end project like this. At this point, AI wasn’t able to design the right models for our goals, drive the sequencing of work across teams, or keep all the moving pieces on track. We still needed people to frame the problem, pick the methods, set quality thresholds, and decide when to pivot. Beyond basic, repetitive tasks, AI still isn’t reliable at carrying that context through the full lifecycle.

The creative side is another limit. While AI can draft copy, suggest layouts, and generate visuals, the final deliverables still required human taste and judgment to tune the voice, ideate and contextualize data visuals, and make choices that fit our brand and audience. The tools are getting better every day, and we will keep experimenting, but they are not ready to own creative direction or produce truly differentiated deliverables by themselves. AI is a strong contributor. It is not the project owner.

The Future

AI tools are moving fast. What was unreliable this cycle could be ready by our next round of research. We will keep testing and adopting the best options for our work, and we will fold standout platforms into the ever-growing list of companies we consider for the Tercera 30.