In many companies, people still have to wait for a report or ask a specialist to run a query before they can act on data. At the same time, leaders want analytics that feels as natural as a conversation, shows its work, and fits neatly inside the tools their teams already use. That tension between speed and trust is where Ashish Shubham spends his time.
Ashish Shubham is an IEEE Senior Member and an Engineering Fellow and Vice President of Engineering at ThoughtSpot, where he guides the architectural vision for the company’s AI teams and front end organization. His work on natural language question answering, intelligent search modification guidance, and conversational database analysis has resulted in a family of patents that sit beneath ThoughtSpot products and shape how business users ask questions and see answers at scale. In this interview, we explore how he thinks about agentic analytics, why Spotter matters, and what it takes to turn research into day to day decision making.
Ashish, thanks for joining us. In simple terms, how would you explain your work to someone outside your field?
I tell people that my job is to help companies make smarter decisions without making everyone a data expert. My teams build tools that let developers bring AI driven analytics into the applications people already live in every day. Instead of digging through spreadsheets or waiting for a dashboard, anyone should be able to ask a question in plain language and see clear, trustworthy answers in a few seconds.
What is the big problem you are trying to solve with that work?
The main problem is that insight is still locked away for most people. Data exists in warehouses, lakes, and reports, but the path to an actual decision passes through a small group of specialists. That slows everything down and often filters out the nuance of a question. I want a store manager, a product owner, or a finance analyst to look at the same system and simply ask, “What happened here, and why did it change?” and then get a response that respects their context and the data underneath it.
Spotter has become a central part of ThoughtSpot’s strategy. For readers who have never heard of it, what is an AI analyst agent and how does Spotter work in practice?
Spotter is an AI analyst that lives on top of your data stack. Instead of giving you a fixed dashboard, it holds a conversation with your data. You type or speak a question in everyday language, and Spotter turns that into the right query, runs it on your enterprise data, and then explains the result in a way that feels natural.
Under the hood, it relies on a semantic layer that understands business concepts, a search engine that can work across structured and unstructured data, and large language models that help interpret intent. It also keeps track of previous questions so the experience feels like a conversation rather than a series of disconnected reports. The goal is not just to answer one question, but to stay with you as you dig deeper, compare scenarios, or spot unexpected patterns.
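The conversational memory described here can be sketched in a few lines. This is an illustrative toy, not Spotter’s actual implementation: each new question is interpreted against the accumulated intent of earlier turns, so a follow-up like “what about EMEA?” inherits the metric from the previous question. All names and matching rules are invented for the example.

```python
# Toy sketch of conversational context carry-over (illustrative only).
# Each turn starts from the intent built up so far, then layers on
# whatever the new question adds, so follow-ups stay connected.

def interpret(question: str, context: dict) -> dict:
    intent = dict(context)  # inherit metric, filters, etc. from earlier turns
    if "sales" in question:
        intent["metric"] = "sales"
    if "EMEA" in question:
        intent["filter"] = "region = 'EMEA'"
    return intent

ctx: dict = {}
ctx = interpret("show me total sales", ctx)  # establishes the metric
ctx = interpret("what about EMEA?", ctx)     # adds a filter, keeps the metric
```

A real system would of course use language models rather than keyword checks, but the shape is the same: state flows forward so the exchange feels like one conversation rather than disconnected queries.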
You hold a patent on natural language question answering systems for analytics. How does that invention show up in the way people experience Spotter day to day?
That patent is really about turning a plain language question into the right query without forcing the user to think like a database. It describes how a system can take a free form request, generate a set of possible queries, score them using both the words in the question and the structure of the data, and then choose the best candidate before it ever shows a result.
In Spotter, that design shows up every time someone types a question that does not perfectly match a column or table name. The system still has to infer intent, resolve it to the right fields, and avoid surprising joins. It also has to keep the link back to the exact query it ran so answers remain auditable. The invention gives us a disciplined way to do that. Users feel like they are simply asking for insight in their own words, while under the hood the product applies a very deliberate search and ranking process that keeps results both intelligent and predictable.
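The generate-score-choose loop described in the patent can be sketched roughly as follows. This is a deliberately simplified illustration under invented names, using a crude lexical score where the real system would combine language signals with data structure:

```python
from dataclasses import dataclass

# Hypothetical sketch of candidate-query ranking: generate several
# plausible interpretations of a plain-language question, score each
# against the question's words and the schema fields it maps onto,
# and pick the best before showing any result.

@dataclass
class Candidate:
    sql: str            # the concrete query this interpretation would run
    columns: list[str]  # schema fields the interpretation resolves to

def lexical_score(question: str, candidate: Candidate) -> float:
    """Fraction of a candidate's columns mentioned in the question."""
    words = {w.lower() for w in question.split()}
    hits = sum(1 for col in candidate.columns
               if any(part in words for part in col.lower().split("_")))
    return hits / max(len(candidate.columns), 1)

def pick_best(question: str, candidates: list[Candidate]) -> Candidate:
    # The winning candidate keeps its SQL attached, so the final answer
    # stays auditable back to the exact query that produced it.
    return max(candidates, key=lambda c: lexical_score(question, c))

candidates = [
    Candidate("SELECT region, SUM(sales) FROM orders GROUP BY region",
              ["region", "sales"]),
    Candidate("SELECT product, COUNT(*) FROM orders GROUP BY product",
              ["product", "order_count"]),
]
best = pick_best("total sales by region", candidates)
```

Keeping the chosen query attached to the answer is what makes the "auditable" property above concrete: there is always a specific query to inspect.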
From a business perspective, what has Spotter changed for ThoughtSpot and for your customers?
For the company, Spotter marks a shift from search driven analytics toward what we call agentic analytics. It is part of roughly eighty percent of new customer agreements at ThoughtSpot and influences about one hundred thirty million dollars of annual company revenue. That scale means the design choices behind it really matter, because they affect how quickly customers adopt the platform and how deeply they use it.
For customers, the impact shows up in how widely analytics spreads beyond a core data team. When Spotter sits inside a sales workspace or a support console, people who never logged into a classic business intelligence tool start asking questions every day. They can ask about pipeline health, ticket backlogs, or campaign performance in simple language, refine the question a few times, and then share a narrative answer with their team. Over time you see fewer static decks and more live conversations with data.
You were recently elevated to Senior Member of IEEE. How does that recognition and your involvement with the broader engineering community influence your view of AI products like Spotter?
Being part of IEEE, and especially being recognized as a Senior Member, reinforces a simple idea for me. Innovation has to sit next to responsibility. The community spends a lot of time on topics like model evaluation, system reliability, and the ethics of automated decision making. Those discussions do not stay in conference rooms. They flow back into architecture decisions.
In practice, that means we are very deliberate about guardrails. For example, we design Spotter so that every answer can be traced to a concrete query and data source. We think hard about row level security and how permissions flow into an agent experience. We make sure there is always a way for a human to review, correct, and teach the system. IEEE values around rigor and public trust help anchor those choices when new tools make it very tempting to ship something flashy without the right safety net.
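Two of the guardrails mentioned, traceability and row level security, can be sketched together. This is a minimal illustration with invented names, not ThoughtSpot’s actual mechanism: the answer object always carries the query and source it came from, and a permission filter is applied before the agent ever sees a result.

```python
from dataclasses import dataclass

# Illustrative guardrail sketch: every answer is traceable to a concrete
# query and data source, and row-level security is applied up front.

@dataclass
class Answer:
    text: str    # the narrative shown to the user
    query: str   # the exact query that produced this answer
    source: str  # the data source it ran against

def apply_row_level_security(sql: str, user_region: str) -> str:
    """Wrap the query so results respect the user's row-level permissions."""
    return f"SELECT * FROM ({sql}) t WHERE t.region = '{user_region}'"

def answer_question(sql: str, source: str, user_region: str) -> Answer:
    secured = apply_row_level_security(sql, user_region)
    # A real system would execute `secured` here; either way, the trace
    # is recorded so a human can review and correct the behavior later.
    return Answer(text="(result summary)", query=secured, source=source)

ans = answer_question(
    "SELECT region, SUM(sales) s FROM orders GROUP BY region",
    source="warehouse.orders", user_region="EMEA")
```

The point is structural: because the trace is part of the answer type itself, there is no code path that returns an insight without its provenance.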
Your recent paper on Infrastructure As Code in AI engineering looks at reproducible model deployment. How does that research connect to the day to day work of shipping an agent like Spotter?
The paper, which I co-authored with my colleague Sayantan Ghosh, explores how Infrastructure As Code practices can bring discipline to AI engineering. We looked at how to treat the entire stack from data pipelines to model serving as versioned, testable code so teams can reproduce an experiment, roll back safely, and understand exactly what changed between two deployments.
That mindset is essential for products like Spotter. An AI analyst agent depends on many moving parts. There is the semantic model, the prompt templates, the ranking logic, the connectors to data sources, and the user interface that presents answers. If you cannot describe and replay the full configuration, you cannot debug surprising behavior or prove to a customer that the system behaved as intended.
So we borrow heavily from Infrastructure As Code. We keep configurations in source control, run automated checks on changes, and treat experiments as assets that can be promoted from development to production in a controlled way. It may sound unglamorous, but it is what lets us move fast without turning every release into a gamble.
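That discipline can be sketched as a toy example. The field names and versions below are invented: the idea is only that an agent’s full configuration is a single versioned artifact, that automated checks run on every change, and that promotion to production copies the exact configuration rather than rebuilding it.

```python
from dataclasses import dataclass, replace

# Toy Infrastructure-as-Code sketch: the agent's configuration is a
# frozen, versioned value, so two deployments can be diffed, reproduced,
# or rolled back. All fields are illustrative.

@dataclass(frozen=True)
class AgentConfig:
    semantic_model_version: str
    prompt_template_version: str
    ranking_strategy: str

def validate(cfg: AgentConfig) -> None:
    """Automated check run on every change before promotion."""
    if cfg.ranking_strategy not in {"lexical", "hybrid"}:
        raise ValueError(f"unknown ranking strategy: {cfg.ranking_strategy}")

def promote(dev_cfg: AgentConfig) -> AgentConfig:
    validate(dev_cfg)
    # Promotion is a byte-for-byte copy: nothing changes between
    # development and production, which is what makes behavior
    # reproducible and rollbacks safe.
    return replace(dev_cfg)

dev = AgentConfig("sm-2024.06", "prompt-v12", "hybrid")
prod = promote(dev)
```

In practice the artifact would live in source control alongside the connectors and serving configuration, but the invariant is the same: if you cannot replay the configuration, you cannot explain the behavior.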
For teams that want to embed conversational analytics in their own products, where should they start?
I always suggest starting from the user journey and one real workflow. Pick a specific group, like customer success managers or operations planners, and watch how they currently get answers. Which questions come up again and again? Where do they copy paste data into spreadsheets? Where do they wait for someone else to help? If you can make one of those loops disappear with an embedded assistant, you already have a strong case.
From there, invest early in a clean semantic layer and in governance. It is tempting to wire an agent directly to a data warehouse and let it discover everything, but that usually produces fragile results. When your business terms, metrics, and permissions are explicit, both your analysts and your AI agent have a reliable foundation to stand on.
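What "explicit business terms, metrics, and permissions" might look like can be sketched minimally. Everything here is hypothetical, using the example questions from earlier in the interview: the point is that both analysts and the agent resolve a business term through one governed definition instead of guessing at raw warehouse columns.

```python
# Minimal sketch of an explicit semantic layer (all names invented):
# business terms map to governed definitions, and permissions are part
# of the same declaration, so the agent inherits governance for free.

SEMANTIC_LAYER = {
    "metrics": {
        "pipeline health": "SUM(open_opportunities.amount)",
        "ticket backlog": "COUNT(tickets WHERE status = 'open')",
    },
    "permissions": {
        "sales": ["pipeline health"],
        "support": ["ticket backlog"],
    },
}

def resolve_metric(term: str, role: str) -> str:
    """Map a business term to its governed definition, honoring permissions."""
    if term not in SEMANTIC_LAYER["permissions"].get(role, []):
        raise PermissionError(f"role '{role}' may not query '{term}'")
    return SEMANTIC_LAYER["metrics"][term]

expr = resolve_metric("pipeline health", role="sales")
```

Wiring an agent directly to the warehouse skips this layer, which is why results end up fragile: the same question can resolve to different columns on different days.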
Looking ahead, what excites you most about the next few years of agentic analytics?
I am excited by the idea that analytics will feel less like visiting a separate tool and more like having a capable colleague in every application. The patents we discussed, the work on Spotter, and the discipline around reproducible AI engineering are all steps in that direction. We are moving toward a world where an operations lead, a marketer, and a physician all expect to have a conversational partner that understands their data, respects their constraints, and can explain its reasoning in clear language.