This piece really made me think; your comparison of data to currency and the "informational inflation" problem is spot on. Given how much algorithms now dictate, do you think we also need to redefine what trust means beyond mere precision, perhaps to include interpretability or fairness metrics?
In a few years, there will be at least two classes of data: before and after the advent of LLMs.
Everything that came before LLMs can be compared to logarithmic scales of creativity and analysis. Up to that point, everything was purely human: solutions conceived and built from nothing. That era is equivalent to the gold standard of currencies, because no external agent helped you find the solution. Knowledge extracted from data is accurate only to the extent that humans assign it value. There, your source of truth is pure, auditable data, just like gold when markets lose confidence.
With the advent of mass-market LLMs, we have seen a break in that trust standard. I want to be careful with extreme positions, but the masses will not validate every piece of information an LLM provides; they trust that the model is accurate enough not to make mistakes. This is the fiat currency of data, and a breakdown of what the word "trust" means. Trust is no longer about precision: because the effect of LLMs on creativity scales geometrically, extremely small creative errors become acceptable (information inflation). Just as monetary inflation erodes purchasing power, information inflation erodes a notion of trust grounded purely in precision.
Therefore, in the near future, data produced without AI interference will be like gold during a breakdown of trust in the markets (big tech), because it is pure, auditable, and accurate. But I pose a question to you: how much longer will we still have gold-standard data?