Designing an O2O Dispatch Algorithm for Startups: Balancing Complexity and Feasibility

In the context of the internet industry, "algorithms" are often imbued with excessive imagination: they seem capable of solving efficiency issues, balancing fairness, eliminating redundancy, enhancing user experience, and even reshaping the order of an industry.

However, for startups or projects in the 0-to-1 phase, this imagination often outpaces reality. Resources are limited, data is scarce, development capabilities are constrained, and the business itself is constantly undergoing trial and error and iteration.

At this stage, overly complex algorithms not only fail to bring higher certainty but instead become internal friction that hinders decision-making and product progress.

Recently, I used AI to design a dispatch algorithm for a hospital logistics service platform. During my collaboration with ChatGPT, I gradually realized a simple yet important truth: a startup project does not need to pursue a "perfect scheduling system" but rather a rule-based system that is stable enough, explainable, maintainable, and can be implemented in the shortest possible time.

Complexity is not the answer; principles come before algorithms.

Before delving into algorithmic discussions, it is necessary to outline the practical context of the business.

The client for this project is a traditional hospital logistics service company, primarily offering offline services such as patient care, cleaning, and transportation. Essentially, it is a typical offline operation-driven enterprise without an internet DNA:

  • No online operation experience;
  • Internal systems rely on manual scheduling;
  • Limited informatization, with discontinuous and non-standardized data collection;
  • Not a fully internet-based business, with no need for internet traffic operations;
  • Management's understanding of algorithms and rule-based systems comes more from intuition than model-based thinking.

This means that no complex algorithm can truly be implemented here: not because the algorithm is not advanced enough, but because the organization itself cannot absorb that complexity. Therefore, the primary premise of designing a dispatch system is not "whether it can be built" but "whether the client can use it, maintain it, and understand it."

Given the organization's operational capabilities, over-engineering will not improve efficiency but will instead make business implementation more difficult.

Only by respecting the realistic boundaries of operational capabilities can algorithms move beyond theoretical plans and become systems that truly function within the hospital.

This information is not something the client will proactively provide; it must be discovered during communication about requirements and prioritized in the business design.

During my collaboration with ChatGPT in designing the algorithm, I consistently adhered to two simple but often overlooked principles: usability and cost-consciousness.

Usability means two things:

  • The algorithm must be quickly implementable under the existing organizational and technical conditions;
  • The operations team must be able to understand, master, and maintain it, rather than being "held hostage" by complex rules.

Cost-consciousness is equally critical:

  • On one hand, controlling development costs and future technical maintenance costs;
  • On the other hand, avoiding placing additional burdens on operations, such as excessive reliance on metric collection, data cleaning, or manual intervention.

These two principles are, in fact, a realistic respect for the limited resources of a startup and determine whether the algorithm can ultimately enter the production environment.

Unnecessary Complexity: Those "Good-Looking Solutions" I Immediately Rejected

During the back-and-forth discussions with ChatGPT, some "seemingly reasonable" advanced methods were quickly rejected by me. The reason is simple: they look good but are useless and unnecessary. They do not bring positive benefits to the business but instead increase technical and operational costs.

ChatGPT's "Smarter" Proposal: A Dual-Smoothing Model Based on Distance + Ratings

Initially, ChatGPT offered a seemingly standardized suggestion:

  • Use a distance-based smoothing algorithm to give higher weight to those closer to the order location;
  • Use fine-grained scoring based on user ratings to more precisely reward or penalize service providers according to their performance.

This is a typical scheduling framework commonly used in urban delivery scenarios, but in my business, it is entirely a "mismatched design."

  1. Distance-Based Scheduling: Meaningless in the Hospital Context and Adds Costs

I quickly rejected the distance-based matching scheme for three reasons:

  • Hospitals are small, enclosed, and fixed spaces: There is no need to calculate differences of two or three kilometers like in food delivery; within a hospital building, 20 meters versus 80 meters makes little difference;
  • Service personnel do not move across hospitals: There is no need for cross-city or cross-region scheduling, eliminating the necessity of "selecting the optimal location";
  • Introducing distance means additional system costs: It requires integrating geographic data, continuously collecting coordinates, handling positioning errors, and maintaining map data.

Distance algorithms look good but do not generate business value; they only increase data and development costs.

  2. Overly Precise User Rating-Based System: Too Small a Sample, Meaningless

Another proposal was a weighted system driven by user ratings, such as:

  • Considering historical five-star ratios;
  • Introducing complaint decay coefficients;
  • Building a service scoring model;

I rejected this as well, because the platform is small, with a limited pool of service providers:

  • Small sample size → high data noise;
  • Small differences → fine-grained scoring cannot truly create gaps;
  • Occasional complaints → can lead to model misjudgments;
  • High precision → high operational costs, requiring explanation, review, and adjustment.

At this scale, complex scoring is meaningless. Precision is not the goal; controllability is.

These "rejected advanced solutions" share a common trait:

They seem designed more to showcase technical capabilities than to solve business problems.

And this is precisely the pitfall that startups or early-stage projects are most likely to fall into.

Complexity is not the answer; survival is the first principle.

Mature platforms often rely on massive amounts of data to train models, use dynamic scheduling systems to predict supply and demand changes, and fine-tune strategies within minute-level time windows. For startup projects, such solutions have almost no realistic foundation.

More importantly, the business's need for "perfect matching" is far less urgent than imagined. Scenarios like hospital patient care, medical transportation, and cleaning services are essentially labor-intensive services. The goal of scheduling is not precise prediction but reducing wait times, avoiding extreme situations, and maintaining service quality stability. Rather than building intricate models, it is better to ensure that the most basic rules do not fail.

Three Core Goals Form the Real Constraints of the Algorithm

In all dispatch discussions, we consistently circled back to the same three goals:

  1. Efficiency: Are orders accepted and executed promptly?
  2. Fairness: Will newcomers be left without orders for extended periods due to system bias?
  3. Quality: Can service personnel maintain basic standards and avoid accidents and complaints?

An effective dispatch system does not aim to maximize each goal but ensures that the tension among the three remains controllable. What a startup project fears most is a single-point imbalance: for example, high efficiency but severe order monopolization, or strong fairness at the cost of dragged-out overall completion times.

The so-called "algorithm" is more about making the trade-offs among these three adjustable and explainable, rather than deriving an unaccountable score through a mysterious model.

Replacing "Complexity" with "Explainability"

We initially discussed a seemingly sophisticated fairness adjustment method: penalizing service providers whose order counts in the past seven days were significantly above average to avoid monopolization. At first glance, it seemed reasonable, but several issues quickly emerged:

  • The average would be structurally inflated by highly active providers, worsening the situation for less active ones;
  • Linear adjustments could not reflect the real differences between newcomers (0 orders) and moderately active providers;
  • The formula was complex, making it difficult for business stakeholders to understand and explain to service providers;
  • For early-stage businesses with small order volumes and high volatility, such a strategy could easily produce counterproductive effects.

These issues are not technical challenges but natural conflicts between the "early business environment" and "complex algorithmic logic." The insight we ultimately gained was: Algorithms should not exceed the actual maturity of the business.

Thus, fairness adjustment was simplified into a more moderate and easier-to-understand nonlinear weighting:

  • Newcomers receive gentle support;
  • Moderately active providers maintain normal status;
  • Overly active providers are slightly damped, without heavy-handed intervention.

It does not pursue mathematical elegance but possesses genuine usability.
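As a minimal sketch of this idea (the function name, the constants, and the `pivot` parameter are my own illustrative choices, not the production formula), a smooth exponential coefficient produces exactly this behavior:

```python
import math

def fairness_coefficient(orders_7d: int, pivot: int = 10) -> float:
    """Nonlinear fairness weight (illustrative): boosts newcomers,
    keeps moderately active providers near 1.0, and gently damps
    highly active ones without hard penalties.

    `pivot` is a hypothetical tuning knob: the 7-day order count at
    which a provider counts as "moderately active".
    """
    # exp(-x/pivot) decays smoothly from 1.0 (newcomer) toward 0,
    # so the coefficient slides from 1.3 down toward a floor of 0.9.
    return 0.9 + 0.4 * math.exp(-orders_7d / pivot)

print(round(fairness_coefficient(0), 2))   # 1.3  (newcomer boost)
print(round(fairness_coefficient(10), 2))  # 1.05 (moderately active)
print(round(fairness_coefficient(40), 2))  # 0.91 (very active, mildly damped)
```

The curve never reaches zero, so even the busiest provider stays in the candidate pool; business stakeholders can explain the rule in one sentence.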

A "Startup-Level" Minimum Viable Dispatch Model (MVP)

The final algorithmic structure is deliberately simple, yet highly flexible:

Score = W1 * (1 / ETA) + W2 * Service Quality Score + W3 * Fairness Adjustment Coefficient + W4 * Real-Time Status (whether idle, whether overtime, whether skills match)
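The formula above might translate into code roughly as follows. This is a sketch only: the weights, field names, and the way real-time status is folded into a single 0–1 signal are all illustrative assumptions, not the deployed system.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    eta_minutes: float      # estimated time to reach the order location
    quality_score: float    # normalized 0..1 service quality
    fairness_coeff: float   # nonlinear fairness adjustment, around 1.0
    idle: bool              # currently free?
    overtime: bool          # already past shift limits?
    skills_match: bool      # qualified for this order type?

# Hypothetical weights; the point is that business stakeholders can
# read and retune these four numbers directly.
W1, W2, W3, W4 = 1.0, 0.5, 0.3, 0.2

def realtime_status(p: Provider) -> float:
    # Collapse the three real-time flags into one 0..1 signal:
    # idle and qualified score high, overtime pulls the signal down.
    score = 0.0
    if p.idle:
        score += 0.5
    if p.skills_match:
        score += 0.5
    if p.overtime:
        score -= 0.5
    return max(score, 0.0)

def dispatch_score(p: Provider) -> float:
    return (W1 * (1.0 / max(p.eta_minutes, 1.0))  # faster arrival, higher score
            + W2 * p.quality_score
            + W3 * p.fairness_coeff
            + W4 * realtime_status(p))
```

Every term maps one-to-one onto the formula, so when a dispatch decision looks wrong, the team can print the four components and see exactly which weight to adjust.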

Its advantage lies not in "intelligence" but in:

  • All weights can be directly understood and adjusted by business stakeholders;
  • No need for large amounts of historical data or model training;
  • Ability to quickly locate issues when anomalies occur;
  • Can evolve gradually with business growth rather than being solidified all at once;
  • Extremely friendly to small-scale businesses.

In the real world, such simple models are often more operable than seemingly "smart" complex solutions.

The Essence of the Dispatch Process is "Flattened Decision-Making"

If the algorithm determines "how to score," then the dispatch process determines "how to execute." Overly complex scheduling processes push rules into the murky waters of inexplicability and make the system fragile.

A direct and clear dispatch process is as follows:

  1. Order triggered
  2. Filter candidates with matching skills and reasonable distance
  3. Calculate Score and sort
  4. Push the order to the highest-scoring candidate first
  5. If not accepted within the timeout, push to the next candidate in sequence
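The five steps above can be sketched as a short sequential-offer loop. All names here are hypothetical (`offer_order` stands in for whatever push-notification mechanism the platform uses, and scores are assumed to be precomputed), but the control flow is the whole algorithm:

```python
def filter_candidates(order, providers):
    # Step 2: keep only on-site providers with the required skill.
    return [p for p in providers
            if order["skill"] in p["skills"] and p["on_site"]]

def dispatch(order, providers, offer_order, accept_timeout_s=60):
    """Steps 1-5 as sequential offers.

    `offer_order(provider, order, timeout_s)` is a hypothetical callback
    that pushes the order to one provider and returns True if the
    provider accepts within the timeout.
    """
    candidates = filter_candidates(order, providers)
    # Step 3: sort by precomputed Score, highest first.
    candidates.sort(key=lambda p: p["score"], reverse=True)
    # Steps 4-5: push to one candidate at a time, falling through
    # to the next on timeout or decline.
    for p in candidates:
        if offer_order(p, order, accept_timeout_s):
            return p    # accepted
    return None         # nobody accepted; escalate to manual dispatch

# Usage sketch: A declines, C lacks the skill, so B gets the order.
providers = [
    {"name": "A", "skills": {"transport"}, "on_site": True, "score": 0.9},
    {"name": "B", "skills": {"transport"}, "on_site": True, "score": 0.7},
    {"name": "C", "skills": {"cleaning"},  "on_site": True, "score": 0.95},
]
order = {"skill": "transport"}
winner = dispatch(order, providers, lambda p, o, t: p["name"] != "A")
print(winner["name"])  # B
```

Because each order is offered to exactly one provider at a time, there are no race conditions to resolve and every assignment has a single, traceable explanation.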

This is the most implementable and robust scheduling chain for a startup project. It may not be perfect, but it is reliable enough, transparent enough, and easy enough to maintain, avoiding the accumulation of technical debt.

Designing from a Realistic Perspective: A Functional System is More Valuable Than a Dream System

In the end, the insights from this dispatch algorithm extended beyond the scheduling system itself to a re-evaluation of startup product design approaches:

Do not pursue the "ceiling" of technology but focus on whether the "floor" of the business is stable.

Startup projects do not need the "most advanced algorithm in the industry" but rather the solution that best reduces risk, minimizes friction, is quickly implementable, and aligns with operational reality. Only when the business truly enters a phase of scaled growth does complexity become meaningful.

Until then, the most worthwhile approach is a realistic restraint. Technology should serve the real business, not become an obstacle to business progress.

This method is not only applicable to hospital O2O but also to other early-stage service-oriented startup projects. As the business grows, the algorithm can be gradually optimized, but the core principles should always remain.