Launching a product is only the beginning. Post-launch, the real work begins: optimizing, refining, and learning from real-world user feedback to make the product more effective. At Fractal Labs, the post-launch development of the Practice Interviews platform was a prime example of how a strategic and technical pivot is essential to long-term success. This case study explores how Fractal Labs navigated the challenges of post-launch development, turning initial product assumptions into scalable, user-driven solutions.
When Fractal Labs first launched Practice Interviews, the focus was on developing a Minimum Viable Product (MVP) that would validate their hypothesis: that job seekers could benefit from AI-driven feedback on their interview answers. At this stage, the team was not concerned with building a polished, bug-free platform, but rather with proving that the concept worked and that people would pay for it.
As Matt Lim, our Principal Product Manager, described, “The first version of the product was about getting something to market quickly to see if users would find value in it.” Fractal Labs took a practical approach to development—focusing on building just enough functionality to test whether people would use and pay for the platform. Once this hypothesis was confirmed through early user feedback and initial sales, the focus began to shift.
But launching an MVP is a far cry from launching a fully fledged product. Austin, the CEO & Tech Lead, explained, “In the MVP stage, you’re not too worried about scalability or extensive testing. The goal is to move fast and iterate.” However, once paying users were on board, the stakes changed. Stability, scalability, and user experience became the new focus.
After the MVP launched, we encountered an important challenge: friction in the user onboarding process. While the core product—an AI system that gives feedback on interview answers—was validated, getting users to complete a practice interview was harder than expected.
As Matt noted, “The first goal post-launch was to simplify the process of getting new users to complete an interview. We quickly realized that setting up an account and getting into an interview flow was a pain point that caused users to drop off before they even got to the core feature of the product.”
This friction slowed down the user journey and prevented new users from experiencing the platform’s value. The team’s immediate response was to streamline the onboarding process, making it more intuitive and reducing the steps required to perform a practice interview. This strategic move significantly lowered the barriers to entry, allowing users to more easily engage with the platform.
Once onboarding was optimized, the next hurdle was retention. Matt explained that, unlike traditional SaaS platforms where users may engage consistently over a long period, Practice Interviews had a transient user base. “People come to us when they’re preparing for interviews. Once they land a job, they don’t have a need for the platform anymore,” Matt pointed out.
This transient nature posed a challenge: how could Fractal Labs keep users engaged and incentivize them to return, even when they weren’t actively interviewing? Matt and the team recognized that to increase customer lifetime value (CLV), they needed to explore features that appealed not only to active job seekers but also to those casually considering career moves or preparing for future opportunities.
Post-launch, they worked on diversifying the user experience, making the platform useful not just for urgent job seekers but for anyone who wanted to stay interview-ready. This required both strategic thinking and an understanding of their user base’s lifecycle.
While the product team focused on reducing friction and boosting retention, the engineering team faced its own set of challenges. For Austin, post-launch development was all about making the platform more stable, robust, and scalable. Initially, during the proof-of-concept and MVP phases, they skipped over unit testing and certain backend optimizations. “When you’re in the early stages, you focus on getting the basics right. But after launch, stability becomes a key concern,” Austin explained.
One of the most significant technical developments post-launch was the creation of modular feedback, a feature that allowed users to receive feedback in smaller, more digestible chunks rather than a large, intimidating block of text. This decision came directly from user feedback.
“In the early days, users would get a huge wall of text after completing a practice interview,” Austin said. “But we quickly learned that this format was overwhelming, especially for less experienced users. People wanted more specific, actionable feedback that they could focus on and improve in stages.”
What seemed like a simple change on the surface—breaking up feedback into smaller sections—required a significant technical overhaul. “The way the system was originally built assumed a one-input, one-output model. We had to change that assumption entirely, moving to a one-input, multiple-output model,” Austin explained. This meant reworking how data was stored, how API calls were made, and how feedback was processed.
Austin and his team had to touch almost every part of the codebase to make modular feedback work. However, the investment paid off. “Now that we’ve made the system more flexible, future iterations—such as adding new feedback modules—will be much easier to implement,” Austin said. This was a key milestone, making the platform not only more user-friendly but also scalable for future developments.
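The case study does not show the underlying schema, but the shift Austin describes, from one input with one output to one input with many outputs, can be sketched roughly as follows. The class names and feedback categories here are illustrative assumptions, not the platform’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class LegacyFeedback:
    """Original assumption: one answer maps to one monolithic feedback blob."""
    answer_id: str
    feedback_text: str  # the "wall of text" users found overwhelming

@dataclass
class FeedbackModule:
    """One focused, actionable piece of feedback (e.g. structure or clarifying questions)."""
    category: str
    feedback_text: str

@dataclass
class ModularFeedback:
    """New assumption: one answer fans out to many independent feedback modules."""
    answer_id: str
    modules: list[FeedbackModule] = field(default_factory=list)

    def add_module(self, category: str, text: str) -> None:
        """Attach another module without touching the ones already stored."""
        self.modules.append(FeedbackModule(category, text))

# Example: the same answer now carries several small pieces of feedback.
feedback = ModularFeedback(answer_id="answer-123")
feedback.add_module("structure", "Open with a one-sentence summary of the situation.")
feedback.add_module("clarifying_questions", "Ask what success looks like before answering.")
```

Once the data model fans out this way, adding a new feedback category becomes a matter of adding another module rather than reworking storage and API calls again, which is the flexibility Austin points to.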
One of the most unexpected post-launch challenges was a DDoS attack that hit the platform earlier than anticipated. “We knew that security issues would arise eventually, but we didn’t expect them to happen so soon,” Austin recalled. The attack forced the team to shift focus from building new features to fortifying the platform’s security. This sudden change in priorities highlighted the need for a more agile approach to development.
“We had to reallocate resources immediately. One week we were working on a new feature, and the next, all hands were on deck to protect the platform,” Austin said. Being able to pivot quickly and adapt to new threats became a crucial aspect of post-launch development.
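The case study does not detail how the platform was ultimately fortified, but a common first line of defense in situations like this is per-client rate limiting at the application edge. The sketch below is purely illustrative and assumes nothing about Fractal Labs’ actual mitigation:

```python
# Illustrative only: a simple per-client sliding-window rate limiter of the kind
# often added quickly as a first line of defense against request floods.
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self._hits = defaultdict(deque)  # client_id -> recent request timestamps

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        hits = self._hits[client_id]
        # Drop timestamps that have fallen outside the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False  # over the limit: reject or challenge the request
        hits.append(now)
        return True

limiter = RateLimiter(max_requests=60, window_seconds=60.0)
if not limiter.allow("203.0.113.7"):
    print("429 Too Many Requests")
```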
One of the most rewarding aspects of post-launch development was how user feedback drove feature development. A standout example was the introduction of the Ideal Answer feature. Originally, the team didn’t plan to provide users with ideal answers to interview questions. However, as Matt explained, “Dozens of users kept asking for this feature. They wanted to see an example of a perfect answer in addition to receiving feedback on their own responses.”
This overwhelming demand led the team to create a feature where users could see what an ideal answer would look like, based on the feedback they had received. “It was something we hadn’t thought of during the initial development, but it became one of the most appreciated features,” Matt noted.
Similarly, the development of modular feedback was born from user frustration with long, dense feedback. The new, modular system allowed users to focus on specific aspects of their interview responses, such as clarifying questions or structure, making the process of self-improvement much more manageable.
Both the Ideal Answer feature and the modular feedback system underscored the importance of listening to users and iterating based on their needs.
As Fractal Labs continues to develop the Practice Interviews platform, their focus remains on refining the user experience, improving stability, and expanding the product’s value proposition. According to Matt, “Our big focus now is streamlining the free user experience and building out features that appeal to enterprise clients.”
On the technical side, Austin’s team is continuously working to improve the AI models that drive the platform’s feedback. “As new models come out, we have to rigorously test everything,” he explained. For example, when switching from GPT-4 to GPT-4 Turbo, the team encountered unexpected changes in how prompts were processed, which required temporarily reverting to the original model. This constant adaptation and fine-tuning are part of the ongoing journey of post-launch development.
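The article does not describe the test suite itself, but the kind of guard rail Austin alludes to can be sketched as a model pinned behind a configuration value plus prompt-level regression checks run before any switch. The client call, prompts, and checks below are assumptions for illustration, not Fractal Labs’ actual tests:

```python
CURRENT_MODEL = "gpt-4"          # known-good model currently in production
CANDIDATE_MODEL = "gpt-4-turbo"  # model under evaluation

REGRESSION_PROMPTS = [
    "Give feedback on this interview answer: 'I once led a team of five engineers...'",
]

def call_model(model: str, prompt: str) -> str:
    """Stand-in for the real completion call; swap in the platform's actual client here."""
    return f"[{model}] Feedback: consider opening with a clarifying question."

def passes_regression(model: str) -> bool:
    """Cheap structural checks that catch gross behavior changes, such as a new
    model ignoring formatting instructions the old one followed."""
    for prompt in REGRESSION_PROMPTS:
        output = call_model(model, prompt)
        if not output.strip():
            return False
        if "feedback" not in output.lower():  # illustrative sanity check only
            return False
    return True

# Only promote the new model if it clears the regression checks; otherwise stay pinned.
ACTIVE_MODEL = CANDIDATE_MODEL if passes_regression(CANDIDATE_MODEL) else CURRENT_MODEL
print(f"Serving feedback with {ACTIVE_MODEL}")
```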
The story of Practice Interviews highlights the complexity and importance of post-launch development. From addressing friction in the user experience to overhauling backend infrastructure and handling unexpected security threats, the team at Fractal Labs has demonstrated the agility required to succeed in this phase.
For any company launching a product, the real work begins after the launch. It’s about learning from users, refining processes, and continuously evolving both the product and the technology that supports it. For Fractal Labs, post-launch development is an ongoing journey of improvement, ensuring their platform not only meets current user needs but is also prepared for the future.
If you’re a company looking to navigate the complexities of post-launch development, the Practice Interviews case study offers a roadmap: listen to your users, invest in scalable systems, and always be ready to adapt to the unexpected. A few takeaways stand out.
Actively gather and respond to user feedback. In the case of Practice Interviews, user requests directly led to the development of key features like the Ideal Answer and modular feedback.
Simplifying the user journey, especially during onboarding, can significantly improve engagement and retention. Address friction points quickly to prevent drop-off.
Understand your users’ lifecycle. For products with transient user bases, like Practice Interviews, it’s crucial to find ways to keep users engaged even when their immediate need for the product ends.
When evolving an MVP into a stable, scalable product, invest in flexible backend systems that can accommodate future growth and new features with minimal rework.
Expect threats like DDoS attacks to arrive sooner than anticipated. Ensure your infrastructure can be fortified quickly when necessary, and take a proactive approach to security.
As new AI models and technologies are integrated, rigorously test to avoid regressions and unexpected behavior, ensuring stability even with seemingly simple upgrades.
Post-launch development requires constant adaptation. Prioritize tasks based on real-time needs, whether it’s user-driven feature development or addressing unexpected security issues.
Building complex systems, like modular feedback, might take more time and resources upfront, but investing in scalable, flexible solutions ensures smoother future iterations and less technical debt.
Plan features and services that keep users engaged during all phases of their journey, whether they are actively using the product or only returning intermittently.
If you have a question or a project in mind, let's talk.