Beyond Functions: Chinese AI Innovators Are Engineering Relationships


Peng Zhang, founder and president of GeekPark, presents a deep, forward-looking interpretation of what it means to build AI-native products and companies in a time when both human and machine intelligence are evolving in tandem. His reflections stem from years of immersion in the startup ecosystem, and from observing how AI’s rapid ascent is changing product, distribution, and organizational logic.

He begins by framing the need for cross-dimensional products as an inevitable consequence of two forces: the intrinsic requirements of AI systems, and the psychological and behavioral needs of users. AI, by nature, operates in a world of screens, symbols, and abstractions. But human value and emotion often reside in the world of physical presence and multisensory feedback. 

To bridge this, some entrepreneurs have turned to hybrid solutions, such as wearables that combine sensory input (e.g., heartbeat, body temperature) with actuated output (e.g., haptic feedback), in order to deliver emotionally supportive experiences. These designs allow AI not just to observe or advise, but to intervene—adding the possibility of richer, more intuitive interactions that feel physically grounded and emotionally resonant.

From the user’s side, cross-dimensionality is equally essential. Products that combine hardware and software (like Fuzzi, a smartphone accessory with emotional feedback capabilities) show that emotional value can be layered on top of functional utility. A tangible product maintains a presence in the user’s life that software apps, easily forgotten on a crowded screen, cannot. The fusion of emotional play and functional design encourages users to return not only because of what the product does, but because of how it makes them feel. In this context, AI-native products are not just tools; they are companions, presence markers, and trust builders.

These evolving relationships create new opportunities for service distribution. Traditional distribution logic—winner-takes-all platforms, scaled through free services and monetized through ads or commerce—relied on breadth, not depth. But AI-native services are defined by continuity and depth of relationship. Products that solve problems repetitively and improve over time unlock new monetization models, where users are willing to pay for access and usage. This shifts value from platform size to user lifetime value (LTV), and from initial user acquisition to retained engagement. The relationship, if trusted and deepened, becomes a distribution channel itself. As examples, Peng points to Agent-based products that build familiarity and even friendship over time, making pay-as-you-go or usage-based pricing feel natural.
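The shift from breadth to depth can be made concrete with a simple lifetime-value calculation. The sketch below is illustrative only; the fees and retention rates are hypothetical, not figures from the talk:

```python
def lifetime_value(monthly_fee: float, monthly_retention: float) -> float:
    """Expected revenue per user under geometric retention:
    LTV = fee * sum(r**t for t = 0, 1, 2, ...) = fee / (1 - r).
    """
    if not 0.0 <= monthly_retention < 1.0:
        raise ValueError("retention must be in [0, 1)")
    return monthly_fee / (1.0 - monthly_retention)

# Breadth model: tiny per-user ad revenue, shallow retention.
ad_ltv = lifetime_value(monthly_fee=0.30, monthly_retention=0.60)    # 0.75

# Depth model: paid access, relationship-driven retention.
sub_ltv = lifetime_value(monthly_fee=20.0, monthly_retention=0.95)   # 400.0
```

Under these assumptions a trusted, retained relationship is worth hundreds of times a shallow ad-monetized one, which is why the relationship itself starts to function as the distribution channel.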

However, to sustain this depth, the relationship must be genuinely constructive. Long-term user engagement cannot be extracted or manipulated; it must be earned. The best AI-native products liberate human nature rather than exploiting its weaknesses. While the tech industry has long been criticized for feeding user addictions or harvesting data in opaque ways, Peng encourages a different aesthetic—one rooted in trust, transparency, and the promotion of human flourishing. Referencing the philosophical counterbalance between the Seven Deadly Sins and the Seven Virtues, he argues that AI products should not merely “understand” human nature to optimize engagement, but also serve as instruments to elevate, support, and gently correct it.

This reorientation has major implications for how product teams build. AI-native products center on human–AI interaction, with the large language model as a probabilistic, somewhat uncontrollable “magic box” at the core. Thus, product value depends not only on the intelligence of the model, but on how it is wrapped: the I/O system, the interface, the behavioral design, and the scaffolding around uncertainty. This is where the concepts of Broad Input and Liquid Outputting emerge.

Broad Input means moving from passive to proactive sensing. AI-native products must incorporate richer context—sensor data, conversation history, user state—so they can know, understand, and anticipate needs. Contextual awareness elevates intelligence from reactive to anticipatory, minimizing user friction and maximizing emotional alignment. The browser product Dia illustrates this: by seeing all the open tabs and synthesizing them automatically, it eliminates cognitive overhead and creates a sense of shared mental space.
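A minimal sketch of what "Broad Input" aggregation might look like in code. The `Context` class, its fields, and the prompt format are hypothetical stand-ins, not any product's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Aggregated context for proactive sensing (hypothetical design)."""
    sensor_readings: dict = field(default_factory=dict)  # e.g. {"heart_rate": 72}
    conversation: list = field(default_factory=list)     # recent dialogue turns
    user_state: dict = field(default_factory=dict)       # e.g. {"focus": "writing"}

    def to_prompt(self, max_turns: int = 5) -> str:
        """Flatten all context channels into a prompt preamble, so the
        model sees the same 'shared mental space' the user occupies."""
        lines = [f"[sensor] {k}={v}" for k, v in self.sensor_readings.items()]
        lines += [f"[state] {k}={v}" for k, v in self.user_state.items()]
        lines += [f"[turn] {t}" for t in self.conversation[-max_turns:]]
        return "\n".join(lines)
```

The design point is that the product, not the user, does the gathering: like Dia reading every open tab, the wrapper assembles context continuously so the model can anticipate rather than merely react.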

Liquid Outputting complements this by replacing static, one-shot outputs with stepwise, co-created journeys. Because model outputs carry uncertainty, the product must guide the user through a collaborative process. Whether it’s Devin prompting for clarification rather than guessing, Deep Research co-designing a research plan with the user, or YouWare’s Vibe Coding starting from remixable templates, each example demonstrates how intentional process design builds trust. AI products must behave like thoughtful partners—sometimes pausing, asking, or adjusting—instead of pretending to be omniscient engines.
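The pause-and-ask pattern can be sketched as a simple uncertainty-gated loop. Everything here is hypothetical scaffolding: `ask_user` and `model_confidence` stand in for real product plumbing:

```python
def liquid_output(task: str, ask_user, model_confidence) -> list[str]:
    """Stepwise, co-created output: when the model is unsure,
    ask the user rather than guess (cf. Devin's clarification prompts)."""
    steps = [f"draft plan for: {task}"]
    while model_confidence(task) < 0.8:            # uncertainty gate
        clarification = ask_user(
            f"Before I continue with '{task}', could you clarify the scope?")
        task = f"{task} ({clarification})"          # fold the answer back in
        steps.append(f"revised plan for: {task}")
    steps.append(f"final output for: {task}")
    return steps
```

The output is a journey of intermediate artifacts, not a single one-shot answer, which is exactly what builds trust when the underlying model is probabilistic.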

At a higher level, AI-native products are becoming human-centered I/O systems. They are not just delivering tools but outcomes—forms of “realization” that represent the next chapter of the personal computing revolution. The skill of building such products involves not only good engineering but also aesthetic judgment and philosophical clarity about the nature of the human-machine relationship.

These product implications naturally lead to shifts in business models and organizational thinking. The traditional model of product economics—grow user base, monetize attention or transactions—no longer applies cleanly. AI-native companies are evolving in three dimensions: not only expanding their user footprint (the horizontal plane), but also increasing the height of their AI capabilities and the depth of their user relationships. This “volume-based” value model rewards companies that invest in deep product scaffolding, meaningful engagement loops, and continuous capability alignment between AI and user needs.
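The three-axis "volume" idea reduces to simple arithmetic. This is an illustrative toy model with arbitrary units, not a valuation formula from the talk:

```python
def company_value(users: float, capability: float, depth: float) -> float:
    """Toy 'volume' model: value is the product of user breadth,
    AI capability height, and relationship depth, not breadth alone."""
    return users * capability * depth

# A small, deep product can out-value a large, shallow one.
broad_shallow = company_value(users=10_000_000, capability=1.0, depth=0.01)
small_deep    = company_value(users=100_000,   capability=5.0, depth=2.0)
```

In this toy model the hundred-times-smaller user base wins, which is the point: flat user counts understate companies investing along the other two axes.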

Consequently, traditional startup metrics like ARR or user acquisition may underrepresent long-term potential. Low-frequency or superficial tasks may fail to generate the data needed to improve AI capabilities, while high-frequency, high-value collaboration creates compounding returns. This reframes how teams should think about capital, growth, and even the organizational form itself. In the future, companies may be smaller, more focused, and built around core alignment between user and AI agency. The new scale may not require massive headcounts but precise alignment of vision, modeling, and engineering.

Finally, Peng highlights how management itself must adapt. Management science, shaped in the age of industrial mass production, is now misaligned with the AI-native world where distributed intelligence augments every role. New coordination models, roles, and values will be needed—less about hierarchy, more about creativity, adaptability, and continuous learning. Even concepts like pricing and business structure may change. If AI helps users achieve real outcomes, perhaps new forms of postpaid or performance-based pricing—enabled by smart contracts—could redefine transactions. Perhaps users will not only pay but also earn, creating and consuming value in fluid, non-linear cycles.
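The postpaid, performance-based pricing idea could be sketched as follows. The function, its parameters, and the revenue-share figure are all hypothetical illustrations of the concept, not a described mechanism:

```python
def outcome_fee(baseline: float, achieved: float, share: float = 0.10) -> float:
    """Hypothetical performance-based pricing: the user pays a share of
    the measured improvement over an agreed baseline, and pays nothing
    when no improvement is realized."""
    improvement = max(0.0, achieved - baseline)
    return share * improvement

# User's metric improves from 100 to 150: they pay 10% of the gain.
fee = outcome_fee(baseline=100.0, achieved=150.0)   # 5.0
# No improvement, no charge.
no_fee = outcome_fee(baseline=100.0, achieved=90.0)  # 0.0
```

A smart contract could settle such a fee automatically once the outcome metric is verified, which is the transaction form the text speculates about.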

Source: 36kr, GeekPark, qbital