A growing sense of unease is shaping how professionals engage with artificial intelligence, particularly as its capabilities expand across information creation and execution. Dan Pratl, founder of Quadron, believes this anxiety reflects a deeper structural issue that extends beyond automation and into how value itself is recognized.
“We've reached a point at which the maturation of AI has meant that almost everyone feels insecure,” Pratl says, pointing to a broader disconnect between technological advancement and the systems designed to reward human contribution. In his view, existing frameworks for recognition and financial return have either failed to evolve or have devolved into what he frames as speculative or game-like environments, referencing developments in crypto markets and retail-driven trading ecosystems.
Pratl's central argument is that AI is accelerating a shift that has been underway for years. “AI is very good at commoditizing knowledge and the execution of that knowledge,” he explains. “The scarce resource becomes the last mile: expertise, judgment, the deployability of judgment.” As knowledge becomes increasingly abundant and execution more automated, he argues, distinguishing high-quality work from low-quality output becomes significantly more difficult, particularly for non-experts evaluating it.
This dynamic creates what Pratl refers to as a “meta problem,” where the volume of available information continues to grow, yet the mechanisms to verify credibility have not kept pace. “If you're not an expert, all high-quality work looks the same,” he notes, underscoring that current systems offer limited ability to differentiate between accurate insight and confident but unsubstantiated claims.
Within this environment, Pratl argues that visibility often substitutes for credibility. Social platforms, in his assessment, tend to reward attention instead of prioritizing accuracy, enabling what he frames as “the loudest voices” to outperform more rigorous but less visible expertise. “There's no system to reward being right,” he says. “No mechanism to verify individuals quickly and enable non-consensus voices to have a seat at the table.”
Pratl suggests that as AI-generated content becomes more prevalent, the absence of reliable credibility signals risks undermining decision-making across sectors, from business to healthcare. One widely cited estimate puts the cost of online misinformation and disinformation to the global economy at roughly $78 billion per year.
In response, Pratl proposes a credibility economy: a system designed to measure, verify, and reward expertise in a more structured and scalable way. Instead of rewarding output alone, this model shifts emphasis toward judgment and trust, creating mechanisms that attribute value to individuals based on the quality and impact of their decisions.
Quadron, the company he founded, is positioned as an endeavor to build the infrastructure required for such a system. According to Pratl, this involves three core components.
The first is an enterprise layer that adds a finishing, cohesive layer to work within organizations. “I have several work productivity platforms, but what I often find missing is a finishing layer for the final, comprehensive use,” he says. This layer, Pratl explains, is intended to ensure that individuals are recognized for applying sound judgment and delivering validated outcomes, rather than contributing to ongoing workflows without clear attribution.
The second component is a verification layer aimed at modernizing how knowledge is structured and shared. Pratl characterizes existing intellectual property systems as outdated and insufficient for the pace and scale of contemporary knowledge exchange. In their place, Quadron is developing mechanisms that allow insights to be exposed and evaluated while maintaining appropriate levels of security.
The third element consists of what Pratl refers to as credibility markets, which differ from traditional prediction markets by focusing on domain-specific expertise. “It's not generalized speculation. You're not betting on external events where you don't understand the odds,” he explains. Instead, these markets are designed to calibrate credibility in real time, connecting individuals with relevant expertise and allowing their judgment to be assessed within appropriate contexts. He adds, “Organizations need context and structure, which requires a different methodological approach. Individuals need incentives and rewards to organize their information in that manner. We are building the systems to provide both.”
Pratl's perspective is informed by a career that has spanned law, open-source software, crowdfunding, and crypto, each of which, he argues, revealed limitations in how systems incentivize and sustain meaningful participation. Reflecting on these experiences, he shares, “Many such systems didn't have the structural integrity at the incentives level to exist beyond their original creators, and they'd often lose alignment once initial motivations weakened.”
A more personal catalyst emerged during a medical crisis involving his mother, where access to critical information proved inconsistent despite being technically available. “The information was centralized, but it wasn't truly accessible,” he says, noting a system where incentives did not align with the need to surface actionable knowledge.
The eventual outcome, he notes, depended on informal networks instead of structured systems, a reality he believes is untenable given the tools now available.
In the years ahead, Pratl argues, the continued advancement of AI will only intensify these challenges unless new systems are introduced to address them. Without mechanisms that reward accuracy and surface credible expertise, he suggests, decision-making risks becoming increasingly dependent on visibility or chance rather than informed judgment.
“We're all experts,” he says. “Our expertise is valuable if it's structured and surfaced in the right way.” In his view, the credibility economy represents an opportunity to realign technological progress with human value, ensuring that individuals remain active participants in AI-driven systems while also being recognized and rewarded for the quality of their contributions.