We had ideas. We built. We talked to customers. We changed.
How Research Shaped Our Product Strategy - From Features to Market Fit
When we started MyDecisive.ai (MDAI), we had dozens of potential use cases to address and even more features we could build. The question wasn't what we could do—it was what we should do first. Rather than guessing, we let our users tell us.
Let me tell you a story. When we started out, we built a visualization tool so that people could see all of the data running through their pipelines: where it was coming from, how it was being processed, where it was going. A user could use the tool to see everything about their telemetry. We iterated and refined. We showed it to potential users and found that... they didn't care.
So who is our user?
First and foremost, before you can create a tool or experience, you need to know who your user is. We have identified three primary personas:
1) People who run MDAI clusters.
2) People who configure OpenTelemetry (OTel) pipelines in those clusters.
3) People who build services, apps, and business functionality, and who don't want to think about observability any more than they have to.
Research has changed our path
We have conducted multiple interviews and surveys with technical professionals across DevOps, security, and development roles. Observability is a complicated field to say the least, and what we heard was eye-opening and fundamentally shifted our product strategy.
We made some initial guesses, and the results put those guesses to the test. Our users don't just want more dashboards; they want measurable results.
What we learned: Results over visualizations
Our MDAI Prioritization Survey of technical professionals revealed a clear pattern: teams are drowning in data but starving for actionable insights. The top-ranked features (a smaller score means a higher priority) weren't pretty visualizations or complex configurations. They were:
- Efficacy metrics for filtration variables (Priority Score: 2.89)
- Global relationship configuration between services (Priority Score: 2.89)
- Cost analytics for logging impact (Priority Score: 3.44)
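As background on how these scores are read: a value like 2.89 is consistent with a mean-rank calculation, where each respondent ranks the features and the ranks are averaged, so lower means higher priority. A minimal sketch under that assumption (the individual rankings below are illustrative, not real survey data):

```python
from statistics import mean

# Hypothetical per-respondent rankings: each respondent orders features
# from 1 (top priority) to 5. These numbers are illustrative only.
rankings = {
    "efficacy_metrics":       [2, 3, 1, 4, 3, 2, 3, 4, 4],
    "relationship_config":    [3, 2, 4, 1, 2, 4, 3, 2, 5],
    "cost_analytics":         [1, 4, 3, 5, 4, 3, 4, 3, 4],
    "pipeline_visualization": [5, 5, 5, 5, 5, 5, 5, 5, 5],
}

# Priority score = mean rank across respondents; smaller is higher priority.
scores = {feature: round(mean(ranks), 2) for feature, ranks in rankings.items()}

# Print features from highest priority (lowest score) to lowest.
for feature, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{feature}: {score}")
```

With these made-up rankings, the mean ranks reproduce the shape of the survey results: efficacy metrics and relationship configuration tie near the top, cost analytics follows, and pipeline visualization lands at the bottom of the list.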
The message was unmistakable: teams need to prove their observability investments are working, not just see more data flowing through pipelines.
Universal needs vs. role-specific requirements
Another survey confirmed this insight while revealing something equally important: certain needs are universal. 100% of respondents wanted real-time data flow monitoring and pipeline state visualization. But 87.5% also demanded efficacy measurement—they need to know if their efforts are actually working.
This taught us a crucial lesson about market fit: build the universal foundation first, then layer on role-specific capabilities.
From Insights to Action: Our Three-Phase Strategy
The research gave us our roadmap:
Phase 1: Foundation (Immediate)
- Efficacy metrics framework
- Service relationship mapping
- Cost analytics core
Phase 2: Configuration & Compliance
- Variable definition systems
- Automated compliance monitoring
- Flexible deployment patterns
Phase 3: Advanced Operations
- Sophisticated automation
- Advanced visualization capabilities
The path to success
- We start the user experience where our users already are: in the YAML of their config files.
- Once the functionality is proven, we introduce a CLI-based experience that gives more functionality and flexibility.
- As we get more and better signals from users, we can introduce a GUI that makes things easier.
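Concretely, that YAML-first starting point can look like a standard OpenTelemetry Collector configuration. A minimal sketch (the section names follow the standard collector schema, but the endpoint and filter expression are illustrative placeholders, not MDAI's actual config):

```yaml
# Minimal OTel Collector pipeline: receive logs, filter noise, export.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  filter/noise:
    logs:
      log_record:
        # Drop log records below INFO severity (illustrative condition).
        - 'severity_number < SEVERITY_NUMBER_INFO'

exporters:
  otlphttp:
    endpoint: https://example.com/otlp   # placeholder endpoint

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/noise]
      exporters: [otlphttp]
```

Meeting users in this file means a CLI, and later a GUI, can layer on top of configuration they already understand rather than replacing it.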
The Competitive Advantage of Being Wrong
Perhaps most importantly, the research showed us where our initial assumptions were completely wrong. We thought teams would prioritize pipeline visualization and debugging features. Instead, they ranked these consistently lower, with some scoring 5.0+ on our priority scale (again, lower scores mean higher priority).
DevOps teams prefer prevention over troubleshooting. They want automated compliance and measurable ROI, not more places to hunt for problems.
Market Fit Through Listening
This research didn't just inform our feature roadmap—it defined our market positioning. We're not building "another pipeline visualization tool." We're building the results-driven observability platform that provides measurable ROI through service-relationship-aware filtering and cost optimization.
The data showed us that 56% of our survey respondents were DevOps engineers—operations-focused professionals who value efficiency, automation, and system reliability above all else. This operational maturity explains why they consistently chose metrics over dashboards.
The Bottom Line
Research kept us from falling prey to feature-driven thinking and made us a results-driven company instead. Rather than building what we thought users needed, we built what they actually prioritized. The result? A clear product strategy that addresses real operational pain points rather than theoretical use cases.
Key takeaway: When users consistently tell you they value measurable outcomes over visual complexity, you listen. When 100% of respondents want the same core capabilities, you build those first. When teams prefer automation over configuration, you design accordingly.
Our research didn't just influence our decisions—it fundamentally changed how we think about observability. We went from asking "What features should we build?" to "What problems can we measurably solve?" That shift made all the difference in finding our market fit.
What's the biggest insight you've had when talking to your customers? Join us on Slack: the MyDecisive community Slack.
Alan
The surveys analyzed included responses from DevOps Engineers (56%), Developers (50%), SREs, and other technical roles across multiple organizations. Priority scores were calculated using ranking methodology where lower scores indicate higher priority.