Prof. Frenzel
15 min read · Aug 30, 2024
What Every Business Analyst Must Know — Part 6: Balancing Quality and Pragmatism

Dear Analysts🔍!

If you’re a high performer, perfectionism can be a tempting trap. You want your work to be more rigorous and more insightful than your peers’. Ultimately, that should set you apart, right? Whether it’s presenting the best product, identifying the investment strategy with the highest alpha, or developing a machine learning model with the lowest prediction error, you’re convinced that putting in extra time will pay off. This expectation was deeply ingrained in me in investment banking, where 80-hour work weeks are common and peer pressure is high. But the mindset is instilled in all of us much earlier: throughout our entire educational path, from high school to grad school, anything less than an ideal solution is met with point deductions.

But then, one day, you step into the real world — perhaps into your first empirical research study dealing with real data or, if you’re lucky, a role at a high-growth company — and reality hits. It’s messy, confusing, and non-linear, and there is no perfect solution. You quickly learn that getting things done often matters more than achieving perfection. There will never be enough time, data, budget, or smart people in the room to reach 100% accuracy.

“I’ve missed more than 9,000 shots in my career. I’ve lost almost 300 games. Twenty-six times I’ve been trusted to take the game-winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.” — Michael Jordan

There’s no shortage of advice out there telling you why perfectionism is your enemy. Every startup pitch workshop I’ve attended emphasizes the importance of MVPs (Minimum Viable Products): Build just enough to learn if you’re onto something or wasting your time.

While I understand the rationale behind this and have certainly experienced the pitfalls of over-optimizing on the startup battlefield, I have to disagree with the often-quoted statement that “done is better than perfect.” Half-baked analytics can lead you down entirely the wrong path, wasting time and money. Cutting corners on data quality or analysis can snowball into much bigger problems down the road. Over the past decade, I’ve seen how the ‘move fast and break things’ mentality can backfire spectacularly when applied to sensitive domains like finance or healthcare: key insights and complex models that crumbled under just a few critical questions because they were rushed to completion, investment strategies and new apps that failed right after launch because the teams didn’t take the time to properly validate their assumptions. Many of these failures could have been avoided with more deliberate planning and testing.

Between ‘done’ and ‘perfect’ there is a range you need to determine for your specific context: the point where the marginal benefit of further refinement is outweighed by the opportunity cost. So, in this article, I’m going to break down my personal playbook for tackling this tricky balance. It’s a set of questions and guidelines I’ve pieced together over the years, and it has helped me navigate the ‘done vs. perfect’ minefield more times than I can count. And while I’m a data nerd at heart, it’s pretty versatile: no matter what your job title is, you’ll probably find something useful here. I’ve mentored in investment, data analytics, and data science for almost a decade now, and I keep seeing bright-eyed rookies and even seniors either skimming over the math and stats or going way overboard trying to build the ultimate model. Next thing you know, they’re stuck in analysis paralysis.

[Image: Analysis Paralysis]

The Case for Pragmatism: “Done is Better Than Perfect”

In my first year as a quant investor, I learned the hard way about the pitfalls of perfectionism. I took over the firm’s systematic multi-asset strategy, and my first goal was to backtest and verify every single parameter of the forecasting and optimization algorithms. With over $300 million in assets under management, even a small alpha would have translated into a significant absolute gain. And every simulation, every possible path would bring me closer to this alpha — or so I thought. I remember asking my entire team — and even the neighboring private equity team — to run my backtest algorithms after hours so I could reach approximately 6 billion historical backtests. Cloud computing wasn’t a thing back then, and my tech stack was not where it is today. Despite all this extra effort and computational power, the results were only marginally better. About six weeks later, after several team meetings and additional attempts to fine-tune the models, the incremental improvements still didn’t justify the enormous amount of time and energy invested.

This experience taught me a valuable lesson:

perfect is an unachievable state. It’s a moving target that keeps shifting as you approach it, leaving you perpetually unsatisfied and often burnt out.

In theory, perfection seems like the ideal target — who wouldn’t want to deliver flawless work? Precision is often considered synonymous with perfection, but you can only be as precise as your tools, skills, and the laws of physics allow. So, what does striving for perfection actually do to you?

  • Perfectionism Lowers Your Output: Perfectionists focus on minimizing mistakes, which often leads to spending excessive time polishing deliverables (or every single parameter…). If you invest too much time in refining every detail, you produce less overall, which ultimately hinders your ability to meet deadlines or take on new projects. Think about the opportunity cost!
  • Perfectionism Limits Your Growth Opportunities: When you’re constantly trying to avoid mistakes, you’re likely to stick to what you know — your comfort zone. This is particularly risky in a fast-evolving field like data science, where new technologies like GenAI are rapidly changing the way we work. If you’re afraid of making errors, you might avoid learning new tools or taking on different roles, which ultimately stunts your career growth. Many academics or senior managers I’ve worked with over my career are stuck in this mindset. Convincing them to try a new approach is particularly challenging, so try to accept that mistakes are part of the process.
  • Diminishing Returns on Time Investment: The time you spend refining a deliverable quickly reaches a point of diminishing returns. After a certain level of polish, each additional hour invested yields minimal improvements. In high-growth environments, the time spent striving for perfection can cost you your competitive edge. I once had a colleague who spent weeks fine-tuning a model, only to find that market conditions had changed, rendering much of his work obsolete.
[Image: Diminishing Returns on Time Investment]
  • Difficulty Making Decisions with Incomplete Data: Perfectionists often struggle with making decisions when they don’t have all the information. What if you get more data? What if you test the interaction between more sets of data? This mindset can cause you to miss key deadlines as you endlessly search for more data. In data analytics, where decisions frequently need to be made with partial or incomplete data, this can lead to paralysis. Perfectionists drag out the decision-making process, hoping to gather more information or conduct further analysis to reduce the risk.
  • Becoming a Blocker for Others: Finally, perfectionists can also inadvertently block others by picking apart their proposals without offering constructive alternatives. This behavior slows down team projects and can create a culture of fear, where colleagues are hesitant to share ideas that might be criticized rather than developed further.

This shift in mindset doesn’t come easily, especially if you’re used to the academic rigor of perfecting every detail before submitting your work. But in fast-paced industries, the ability to make decisions quickly, manage risks effectively, and keep moving forward is often more valuable than delivering a perfect product. Embracing pragmatism — knowing when “good enough” is actually enough — is essential for staying competitive and continuing to grow. That doesn’t mean you’re settling for mediocrity; it means you’re being smart about where to invest your time and energy. As I often tell my team, aim for excellence, not perfection. Excellence is achievable and sustainable; perfection is neither.

The Quality Conundrum: Being “Done” Too Early

As a manager, I’ve experienced firsthand the pitfalls of prioritizing speed over quality. In one project, I asked my team to create an economic forecast model to support our strategic asset allocation model. Eager to prove themselves, they dove straight into building the predictive model, relying on assumptions rooted in historical market behavior. The team never questioned whether past trends would continue as they always had. The model was technically “done,” and they delivered ahead of schedule. At first glance, it seemed to work well. But which backtest doesn’t look good after some fine-tuning? It soon became clear, however, that they had overlooked recent market shifts — particularly a growing consumer preference for sustainable investments that hadn’t been a factor in previous years. The out-of-sample tests revealed a significant increase in prediction errors. Had we used this model, our clients would have suffered avoidable losses.

In another project, we were developing an AI-powered customer chatbot for an external client. The client pressured us to add a new feature quickly, so we decided to skip thorough testing and optimization, believing we could refine it after launch. This proved to be a costly mistake. The chatbot ended up providing incorrect responses, leading to frustrated customers and a temporary surge in support requests. We had to bench the bot for a while, potentially damaging our company’s reputation. Not as bad as Microsoft’s Tay, but still a significant setback. For context, Tay was an AI chatbot that Microsoft quickly took offline after it began posting offensive and inappropriate content on Twitter, demonstrating the risks of insufficient testing and safeguards in AI systems. In the end, we spent more time and resources fixing the issues than we would have spent doing it right the first time.

So just being “done” isn’t always better than doing nothing at all. The mantra “done is better than perfect” often suggests that any progress is valuable, but this view can be misleading. In reality, something that is “done” can fall anywhere on a spectrum — from nearly perfect to barely adequate, or even downright bad.

The problem with the “done is better than perfect” mindset is that it assumes a linear progression where every step forward holds similar value. This assumption of linear progress is rarely true in practice, especially in data science. Consider, for instance, the development of a machine learning model: a basic ML model might be trained in minutes and achieve decent accuracy, but properly cleaning the data, handling outliers, validating assumptions, engineering features, and fine-tuning hyperparameters to turn an unreliable model into a robust, production-ready solution could take weeks.
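To make that asymmetry tangible, here’s a minimal sketch, assuming scikit-learn and a hypothetical, fully numeric DataFrame `df` with a binary `target` column. The baseline takes minutes; the preprocessing and tuning around it are where the weeks go, and they’re what separates “done” from dependable.

```python
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical data: a numeric DataFrame `df` with a binary `target` column.
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# The "done in minutes" version: fit on raw features, report one number.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Baseline accuracy:", baseline.score(X_test, y_test))

# The slower part: explicit preprocessing plus a tuned, cross-validated pipeline.
pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # handle missing values
    ("scale", StandardScaler()),                   # put features on a common scale
    ("model", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipeline, {"model__C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)
print("Tuned pipeline accuracy:", search.score(X_test, y_test))
```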

Turns out, slapping a “done” sticker on your work too quickly can lead to some pretty problematic situations. Here are some of the most significant risks of being “done” too early:

  • Burning Bridges with Stakeholders: Delivering work that lacks precision and consistency can erode trust between you and your stakeholders, whether they are clients, managers, or other teams. Trust is the foundation of any successful professional relationship, and once it’s compromised, it’s incredibly difficult to rebuild. When stakeholders lose confidence in your ability to deliver quality work, future recommendations — even if they are of higher quality — might be met with skepticism or outright rejection.
  • Risk of Confirmation Bias: When you’re racing to the finish line, it’s easy to see only what you want to see. You might find yourself cherry-picking data that fits your initial assumptions and brushing off anything that doesn’t quite fit. This kind of tunnel vision can lead you down a path of some seriously misguided decisions.
  • Cost of Rework: Here’s a fun fact: rushing through work often means you’ll be doing it twice. The time you spend going back to fix issues? It usually far outweighs the time it would’ve taken to do it right the first go-round. Rework can also frustrate your colleagues and stakeholders, who might have to pause their own work while waiting for you to correct mistakes that could have been avoided.

In the context of predictive analytics and machine learning, you often hear terms like in-sample or out-of-sample testing, or the more modern equivalents — training, testing, and validation. While these testing phases might seem to slow you down initially, they’re actually your fast track to long-term success. They help you catch and correct issues early, saving you from the much larger time sink of major reworks down the line.
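If that vocabulary feels abstract, here’s a minimal sketch of the discipline, assuming scikit-learn and hypothetical numeric arrays X and y: tune on a validation split, and touch the held-out test split exactly once at the very end.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# 60% train, 20% validation, 20% test (X and y are assumed to exist)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_depth, best_val_score = None, -1.0
for depth in (2, 4, 8, None):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    in_sample = model.score(X_train, y_train)   # optimistic by construction
    out_of_sample = model.score(X_val, y_val)   # the number that actually matters
    print(f"depth={depth}: in-sample={in_sample:.3f}, validation={out_of_sample:.3f}")
    if out_of_sample > best_val_score:
        best_depth, best_val_score = depth, out_of_sample

# One final, one-time check on truly unseen data
final = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_train, y_train)
print("Held-out test accuracy:", final.score(X_test, y_test))
```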

Striking the Balance: A Framework for Decision-Making

Knowing when to stop refining your work and when to push forward can be as important as the analysis itself. It’s a balancing act that requires a nuanced understanding of both the project’s goals and the environment in which you’re working.

Avoiding Perfectionism: Recognizing Diminishing Returns

To avoid falling into the perfectionism trap, it’s important to recognize when your efforts are no longer producing significant gains. One effective way to do this is by setting clear objectives and checkpoints at the start of a project. These checkpoints act as predetermined evaluation points where you assess whether the work has met the necessary standards to move forward or if further refinement is needed.

Here’s a simple framework to help you identify when it’s time to move on:

1️⃣Set Clear Objectives: At the outset of any project, define what “good enough” looks like. This involves setting specific, measurable goals for accuracy, precision, or other relevant metrics (remember SMART goals?). Having a clear understanding of the project’s objectives helps you determine when further refinements are no longer adding value. For example, in a customer churn prediction project, you might set an objective of achieving 85% accuracy. Once you reach this goal, additional tweaks might yield diminishing returns (a small sketch of this stopping logic follows the list below).

2️⃣Establish Checkpoints: Break down the project into stages, each with its own set of deliverables. At each checkpoint, evaluate whether the work meets the criteria for quality and usefulness. If it does, move on to the next stage. If not, make the necessary adjustments before proceeding.

3️⃣Timeboxing: Allocate a fixed amount of time to work on each stage of the project. This practice, known as timeboxing, helps prevent you from spending too much time on one aspect of the project. Once the time is up, reassess whether additional work is truly necessary or if the current state is sufficient to meet the project’s goals. For example, you might allocate two weeks for data cleaning and preprocessing, three weeks for model development, and one week for final testing and validation. This structure helps prevent endless tinkering in any one phase.

[Image: Timeboxing]

4️⃣Iterative Development: Consider adopting an iterative development (agile) approach, where you release initial versions of your work to gather feedback and make refinements in subsequent iterations. This method allows you to deliver results in a timely manner while continuously improving the quality of your work. By breaking the project into smaller cycles, you can balance the need for thoroughness with the reality of deadlines.

5️⃣Make Constructive Recommendations: When you reach a decision point, provide a clear recommendation and state your confidence level. It’s important to communicate what will happen if your recommendation turns out to be wrong. Be transparent about the key assumptions on which your decision is based. If others disagree with these assumptions or if new information emerges later that challenges them, you’ll be in a position to adjust your course accordingly.

6️⃣Embrace Pragmatism in Presentation: It’s easy to get caught up in polishing every deliverable to perfection, but this can backfire in a fast-paced environment. People will notice if you’ve spent excessive time perfecting an internal document at the expense of more impactful work. Instead, focus on making your work clear and understandable. While it’s important not to submit something completely unformatted, spending just a few minutes to make a document easy to digest is often more valuable than chasing perfection.
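Here is the stopping-logic sketch promised in 1️⃣. It’s a minimal illustration, assuming a hypothetical train_and_score() function that runs one more refinement cycle and returns validation accuracy; the target, the minimum worthwhile gain, and the timebox are all fixed before the loop starts, in the spirit of points 1️⃣ through 3️⃣.

```python
import time

TARGET_ACCURACY = 0.85   # "good enough", defined at the project outset
MIN_GAIN = 0.002         # below this, another iteration isn't worth the time
TIMEBOX_HOURS = 40       # hard cap on refinement effort

start = time.time()
best_score = 0.0
for iteration in range(1, 101):
    score = train_and_score(iteration)   # hypothetical: one refinement cycle
    gain = score - best_score
    best_score = max(best_score, score)
    hours_spent = (time.time() - start) / 3600

    if best_score >= TARGET_ACCURACY:
        print(f"Checkpoint passed at iteration {iteration}: {best_score:.3f} meets the target")
        break
    if gain < MIN_GAIN:
        print(f"Stopping: the last iteration added only {gain:.4f} accuracy")
        break
    if hours_spent > TIMEBOX_HOURS:
        print("Timebox exhausted: ship the current version and reassess")
        break
```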

Avoiding the Pitfalls of Stopping Too Early

Stopping too early often happens when you’re under pressure to meet tight deadlines or when you’ve become overly focused on the idea of getting something out the door quickly. To counteract this, it’s important to maintain a high standard of quality, particularly in areas where precision is critical.

Here’s how to avoid stopping too early:

  1. Prioritize Critical Elements: Identify the components of your project that have the greatest impact on its success. These are the areas where you should focus your attention and ensure that they meet the highest standards of quality. For example, in predictive analytics, this might mean thoroughly validating your model’s performance on out-of-sample data before considering it “done.”
  2. Conduct Rigorous Testing: Before declaring a project complete, subject it to rigorous testing. This might involve cross-validation, sensitivity analysis, or scenario testing to ensure that your model or analysis holds up under different conditions (see the sketch after this list). If the results are consistent and meet the predefined criteria, you can confidently move forward.
  3. Seek External Feedback: Sometimes, it’s hard to gauge the quality of your own work because you’re too close to it. Seeking feedback from colleagues or external experts can provide a fresh perspective and help you identify areas that may need further refinement. This step is particularly valuable if you’re unsure whether your work is truly complete. My advice: Consider implementing a peer review system where colleagues evaluate each other’s work. This not only improves the quality of individual projects but also fosters a culture of continuous learning and improvement within the team.
  4. Leverage Automation Tools: To ensure that you’re maintaining quality without over-investing time, consider leveraging automation tools. GenAI, automated testing, model validation libraries, and report generation can help establish a baseline level of quality, reducing the need for manual polishing and freeing up time for more strategic tasks. Automation can be particularly useful in preventing both perfectionism and the pitfalls of stopping too early. But automation is not a silver bullet! You must understand the underlying processes first before you automate (read documents, implement verification processes, etc.) and regularly review the output of automated tools to make sure they’re not propagating errors or biases.
  5. Evaluate the Impact of Potential Errors: Think about the possible outcomes of mistakes in your work. If an error might cause major financial losses, harm your reputation, or raise ethical issues, take extra time to verify accuracy. If a small mistake has little impact, it might be more practical to keep moving and fix problems as they come up.
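As a rough illustration of the testing in point 2, here is a minimal sketch. It assumes scikit-learn, hypothetical numeric arrays X and y, and a hypothetical make_model() factory that returns a fresh, unfitted estimator: cross-validation checks stability across folds, and a simple perturbation test checks whether the conclusion survives noisy inputs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

# Stability check: how much does performance vary across folds?
cv_scores = cross_val_score(make_model(), X, y, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

# Sensitivity check: perturb the features with small Gaussian noise and see
# whether performance (and therefore the recommendation) holds up.
rng = np.random.default_rng(0)
for noise_level in (0.01, 0.05, 0.10):
    X_noisy = X + rng.normal(scale=noise_level * X.std(axis=0), size=X.shape)
    noisy_scores = cross_val_score(make_model(), X_noisy, y, cv=5)
    print(f"noise={noise_level:.0%}: CV accuracy {noisy_scores.mean():.3f}")
```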

Ultimately, the goal of analytics and data-driven decision-making is to deliver work that is both timely and reliable. The key to effective decision-making in this field is to remain focused on the project’s goals while being mindful of the resources at your disposal. Whether you’re refining a model or preparing a report, the ability to recognize when to stop and when to push forward will ultimately determine the success of your work.

Practical Questions to Navigate the Middle Ground

I’ve compiled a set of questions to guide your decision-making process:

📌Ensuring Quality and Reliability:

  1. Is the model theoretically sound and mathematically rigorous? While theoretical soundness is important, obsessing over mathematical perfection can lead to analysis paralysis.
  2. Have all potential edge cases been accounted for? Considering edge cases is valuable, but trying to account for every possible scenario can be a never-ending task. Focus on the most likely and impactful edge cases.
  3. Are the results reproducible and well-documented? Reproducibility is a cornerstone of good science, but don’t let documentation become an endless task. Aim for clear, concise documentation that allows others to understand and replicate your work without getting bogged down in excessive detail.
  4. Has the model been tested on out-of-sample data? Out-of-sample testing is essential for validating your model’s performance. This step shouldn’t be skipped, but be cautious about endlessly tweaking your model based on test results. Strike a balance between model improvement and timely delivery.

📌Assessing Impact and Decision-Making:

  1. Does the analysis answer the original business question? This question grounds your work in practical utility. Be sure your analysis meets the business need without expanding the scope beyond what’s required.
  2. How sensitive is the decision to the analysis? If increased accuracy won’t notably influence the outcome, stick with a rough estimate. For example, when evaluating potential revenue for a new business, knowing whether it’s in the range of $100M or $1B might be enough to make a go or no-go decision, rather than pinpointing an exact figure.
  3. Is the decision reversible? Distinguish between decisions that are “one-way doors” and those that aren’t. Spend more time analyzing choices that are costly or hard to reverse, while making quicker, informed decisions on more flexible matters. For example, selecting a cloud provider for a major infrastructure overhaul is harder to reverse than experimenting with a new marketing campaign.
  4. What is the expected financial cost of being wrong? Consider the potential financial impact of errors in your analysis. If the cost of reversing a decision based on your work is high, it warrants more scrutiny. This could include wasted engineering resources or investments in incorrect tools.
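To make that last question tangible, here’s a tiny back-of-the-envelope calculation with purely hypothetical numbers: compare the expected cost of acting on the current analysis now with the cost of buying more certainty first.

```python
p_wrong_now   = 0.20      # estimated chance the current analysis is wrong
cost_if_wrong = 500_000   # rework, wasted engineering time, lost revenue
cost_of_delay = 60_000    # two more weeks of validation plus opportunity cost
p_wrong_after = 0.05      # estimated error rate after the extra validation

expected_cost_now   = p_wrong_now * cost_if_wrong                     # 100,000
expected_cost_after = cost_of_delay + p_wrong_after * cost_if_wrong   #  85,000
print(expected_cost_now, expected_cost_after)
```

In this made-up case, the extra validation pays for itself; flip the numbers and it wouldn’t. The point is to write the trade-off down instead of arguing it from gut feeling.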

📌Communication and Practicality:

  1. Can the results be explained to non-technical stakeholders? Clarity in communication is vital, but don’t sacrifice accuracy for simplicity. Aim for explanations that are accessible yet maintain the integrity of your findings.
  2. Is the analysis actionable given current resources? Practicality is key. An analysis that can’t be implemented due to resource constraints, no matter how brilliant, may not be valuable. Consider the feasibility of your recommendations within the current business context.