Viewing AI+Crypto From a Primary Market Perspective
Guest Author: Lao Bai
More than a year after the release of ChatGPT, discussions about AI+Crypto have once again heated up in the market. AI is seen as one of the most important tracks of the 2024–2025 bull market, and Vitalik Buterin himself published an article entitled “The promise and challenges of crypto + AI applications,” exploring possible directions for AI+Crypto.
This article won’t make too many subjective judgments, but will instead simply summarize the entrepreneurial projects combining AI and crypto observed over the past year from a primary market perspective. It will examine the perspectives from which entrepreneurs have entered the market, the achievements made so far, and which areas are still being explored.
I. The Cycle of AI+Crypto
Throughout 2023, we’ve talked to dozens of AI+Crypto projects, among which distinct cycles can be observed.
Before the release of ChatGPT at the end of 2022, there were few blockchain projects related to AI in the secondary market. The main ones that come to mind are Fetch.AI (FET), SingularityNET (AGIX) and a few other veteran projects. Similarly, there weren’t many AI-related projects available in the primary market.
The period from January to May 2023 could be considered the first concentrated outbreak of AI projects. After all, ChatGPT’s impact was significant. Many old projects in the secondary market pivoted to the AI track, and almost every week in the primary market AI+Crypto projects were being discussed. That said, the AI projects of this period were relatively simple. Many of them were based on a “skin-deep” adaptation of ChatGPT, combined with blockchain modifications, with almost no core technological barriers. Our in-house development team could often replicate a project framework in just a day or two. This also led to numerous meetings with AI projects during this period, but ultimately, no action was taken.
From May through October, the secondary market began to turn bearish. Interestingly, during this time, the number of AI projects in the primary market also decreased significantly. It wasn’t until the last month or two that the quantity started to pick up again, and discussions and articles about AI+Crypto became richer. We once again entered a period where we could encounter AI projects every week. Half a year later, it was evident that a new batch of AI projects had emerged with a better understanding of the AI track, better grounding in commercial scenarios, and an improved integration of AI+Crypto compared to the first wave of AI hype.
Although the technological barriers still weren’t strong, the overall maturity level took a step forward. It was only in 2024 that we finally made our first bet on the AI+Crypto track.
II. The Track of AI+Crypto
Vitalik Buterin, in his article on “promise and challenges,” provides predictions from several relatively abstract dimensions and perspectives, as follows:
AI as a player in the game
AI as the interface to the game
AI as the rules of the game
AI as the objective of the game
We, on the other hand, will summarize the AI projects currently seen in the primary market from a more specific and direct perspective.
Most AI+Crypto projects are centered on the core of crypto, which we’re defining as “technological (or political) decentralization + commercial assetization.”
Regarding decentralization, there isn’t much to say, as it’s all about web3. Therefore, we can roughly divide the categories of assetization into three main tracks:
Assetization of computing power
Assetization of models
Assetization of data
Computing Power Assetization
This is a relatively dense track, as besides various new projects there are also many old projects pivoting. For example, on the Cosmos side, there’s Akash Network, and on the Solana side, there’s Nosana. After pivoting, the tokens have all experienced crazy surges, which also indirectly reflects the market’s optimism toward the AI track. Although Render (RNDR) primarily focuses on decentralized rendering, it can also serve AI purposes. Therefore, many classifications include RNDR-like computing power-related projects in the AI track.
Computing power assetization can be further subdivided into two directions, based on the use of computing power. One is represented by Gensyn, which is “decentralized computing power used for AI training.” The other is represented by most pivots and new projects, or “decentralized computing power used for AI inference” (the ability of machine learning models to base decisions or predictions on previously learned data or models).
In this track, we can observe an interesting phenomenon, or perhaps a chain of disdain:
Traditional AI → Decentralized Inference → Decentralized Training
Those from a traditional AI background tend to look askance at decentralized training or inference.
Those focused on decentralized inference tend to disapprove of decentralized training.
The main reason is technical: AI training (especially for large models) involves massive amounts of data, and even more demanding than the data itself is the bandwidth required to move it at high speed. In today’s era of large transformer models, training requires a compute cluster built from large numbers of high-end GPUs (4090-class cards or H100 professional AI accelerators), connected by hundred-gigabit-level communication channels formed by NVLink and professional fiber switches. Can you imagine decentralizing this stuff? Hmm …
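To make the bandwidth problem concrete, here is a rough back-of-envelope calculation. The model size, precision, all-reduce cost and link speeds below are illustrative assumptions on our part, not figures from any specific project:

```python
# Back-of-envelope: why gradient synchronization makes decentralized
# training of large models impractical over consumer links.
# Assumptions (illustrative): a 70B-parameter model, fp16 gradients,
# and a ring all-reduce costing roughly 2x model size in traffic
# per GPU per training step.

PARAMS = 70e9                    # model parameters (assumed)
BYTES_PER_GRAD = 2               # fp16 gradient precision
traffic_per_step = 2 * PARAMS * BYTES_PER_GRAD  # bytes moved per GPU per step

NVLINK_BW = 450e9                # ~450 GB/s, NVLink-class interconnect
HOME_BW = 100e6 / 8              # 100 Mbit/s consumer broadband, in bytes/s

t_datacenter = traffic_per_step / NVLINK_BW
t_home = traffic_per_step / HOME_BW

print(f"traffic per step:    {traffic_per_step / 1e9:.0f} GB")
print(f"datacenter sync:     {t_datacenter:.2f} s")
print(f"home-broadband sync: {t_home / 3600:.1f} h")
```

Under these assumptions a single gradient sync that takes well under a second inside a datacenter takes hours over consumer broadband, which is the gap the “chain of disdain” is pointing at.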
The demand for computing power and communication bandwidth in AI inference is far less than in AI training. Naturally, the possibility of decentralized implementation is much greater for inference than for training. That’s why most computing power-related projects focus on inference, while training is primarily left to major players like Gensyn and Together AI, which have raised hundreds of millions in financing. However, from the perspectives of cost-effectiveness and reliability, at least at this stage, centralized computing power for inference is still far superior to decentralized options.
This explains why those focused on decentralized inference look at decentralized training and think, “You can’t make it happen at all,” while traditional AI views decentralized training and inference as “unrealistic in terms of training technology” and “unreliable in terms of inference commercially.”
Some say that when BTC/ETH first appeared, the model of having distributed nodes compute everything seemed relatively illogical (as compared to cloud computing). But in the end, didn’t it succeed? Well, that depends upon the requirements for correctness, immutability, redundancy and other dimensions of AI training and inference in the future. Purely in terms of performance, reliability and price, it’s currently impossible to surpass centralized solutions.
Model Assetization
This is also a crowded track for projects, and is relatively easier to understand as compared to computing power assetization because one of the most well-known applications after the popularity of ChatGPT is Character.AI. With it, you can seek wisdom from ancient philosophers like Socrates and Confucius, engage in casual conversations with celebrities like Elon Musk and Sam Altman, or even indulge in romantic talks with virtual idols like Hatsune Miku and Raiden Shogun. All of this showcases the charm of large language models (LLMs). The concept of AI Agents has become deeply ingrained in people’s minds through Character.AI.
What if figures such as Confucius, Elon Musk or Raiden Shogun were all NFTs?
Isnât this AI+Crypto?
So, rather than calling it model assetization, it’s more apt to say it’s the assetization of agents built on top of large models. After all, large models themselves can’t be put on the blockchain. It’s more about mapping agents on top of models into NFTs to create a sense of “model assetization” in the AI+Crypto space.
There are now agents that can teach you English or even engage in romantic relationships with you, among various other types. Additionally, related projects, such as agent search engines and marketplaces, can also be found.
The first common issue in this track is that there are no technological barriers. It’s basically just the tokenization of Character.AI. Our in-house tech wizards can create an agent that speaks and sounds like a specific character (such as our co-founder, BMAN) in just one night by using existing open-source tools and frameworks. Secondly, the integration with blockchain is very light. It’s somewhat akin to GameFi NFTs on Ethereum, in which the metadata stored may only be a URL or hash, and the models/agents reside on cloud servers. On-chain transactions only represent ownership.
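The “light integration” point can be sketched in a few lines. The field names, URI and addresses below are hypothetical placeholders, not any real project’s schema; the point is only that the chain holds a pointer and a hash, while the agent itself lives off-chain:

```python
# Minimal sketch of how an "agent NFT" typically binds on-chain
# ownership to an off-chain agent. All names are illustrative.
import hashlib
import json

# Off-chain: the agent's configuration lives on a cloud server or IPFS.
agent_config = {
    "name": "Socrates-bot",                  # hypothetical agent
    "base_model": "some-llm-endpoint",       # hypothetical
    "persona_prompt": "You are Socrates...",
}
config_bytes = json.dumps(agent_config, sort_keys=True).encode()

# On-chain: the NFT record stores only a pointer and an integrity hash.
token_metadata = {
    "token_id": 1,
    "uri": "ipfs://<config-cid>",            # placeholder pointer
    "config_hash": hashlib.sha256(config_bytes).hexdigest(),
    "owner": "0xABC...",                     # placeholder address
}

# Anyone can check that the off-chain agent matches the on-chain token:
assert hashlib.sha256(config_bytes).hexdigest() == token_metadata["config_hash"]
```

Ownership of the token transfers on-chain, but nothing about the model or agent behavior is enforced there, which is exactly why the integration feels light.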
The assetization of models/agents will remain one of the main tracks in AI+Crypto for the foreseeable future. Going forward, we hope to see projects with relatively higher technological barriers and a closer, more native integration with blockchain.
Data Assetization
Logically speaking, data assetization is the most suitable aspect of AI+Crypto because traditional AI training mostly relies on visible data available on the internet, or, to be more precise, public domain traffic data. This data may only represent a small percentage, around 10–20%, with the majority of data actually lying within private domain traffic (including personal data). If this traffic data can be utilized for training or fine-tuning large models, we can undoubtedly have more professional Agents/bots in various verticals.
Whatâs the best web3 slogan? Read, Write, Own!
Therefore, through AI+Crypto, and under the guidance of decentralized incentives, releasing personal and private domain traffic data and assetizing it to provide better and richer “food” for large models sounds like a logical enough approach. Indeed, there are several teams deeply involved in this field.
However, the biggest challenge in this track is that, unlike computing power, data is difficult to standardize. With decentralized computing power, the model of your graphics card directly translates into the amount of computing power you have. By contrast, the quantity, quality and purpose of private data are all difficult to measure. If decentralized computing power is like ERC-20, then assetized AI training data is somewhat like ERC-721: imagine many collections (APE, Punk, Azuki) and NFTs with different traits all mixed together. Liquidity and market-making are therefore far more challenging than for ERC-20, and projects focusing on AI data assetization are currently facing significant challenges.
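The ERC-20 versus ERC-721 analogy can be made concrete with a toy sketch. The field names and scoring are our own illustrative assumptions, not a real standard:

```python
# Sketch contrasting why compute is easy to commoditize while data is
# not: compute units are interchangeable (ERC-20-like), whereas each
# dataset is unique (ERC-721-like). All fields are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ComputeCredit:
    # Fungible: one TFLOP-hour is as good as any other.
    tflop_hours: float

@dataclass(frozen=True)
class DataAsset:
    # Non-fungible: datasets differ in domain, quality and rights, so
    # pricing cannot rely on a single common unit.
    owner: str
    domain: str           # e.g. "medical records", "e-commerce chats"
    num_records: int
    quality_score: float  # hard to measure objectively in practice
    license_terms: str

# Two compute credits of equal size are interchangeable...
assert ComputeCredit(10.0) == ComputeCredit(10.0)

# ...but two data assets are distinct even at equal record counts.
a = DataAsset("alice", "medical", 10_000, 0.9, "train-only")
b = DataAsset("bob", "e-commerce", 10_000, 0.4, "resale-allowed")
assert a != b
```

Market-making over the first type is a simple order book; over the second, every asset needs its own appraisal, which is the liquidity problem the paragraph describes.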
Another aspect of the data track worth mentioning is decentralized labeling. Data assetization operates at the “data collection” step, but the collected data needs to be processed before being fed to AI, which is where data labeling comes in. This step is currently mostly centralized and labor-intensive. With decentralized token incentives, transforming this work into a decentralized, label-to-earn model (similar to crowdsourcing platforms) is also a viable approach. A few teams are currently working in this area.
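A label-to-earn scheme can be sketched in miniature. The consensus rule (pay only labels that match the majority vote) and the reward amount are hypothetical simplifications we chose for illustration; real systems add stake, reputation and dispute mechanisms:

```python
# Toy "label-to-earn" payout: workers label items, labels are
# cross-checked by majority vote, and only agreeing workers are paid.
# The reward rule and all names are illustrative assumptions.
from collections import Counter

labels = {                       # item_id -> {worker: label}
    "img_1": {"w1": "cat", "w2": "cat", "w3": "dog"},
    "img_2": {"w1": "dog", "w2": "dog", "w3": "dog"},
}
REWARD_PER_LABEL = 1.0           # tokens per accepted label (hypothetical)

payouts = Counter()
for item, votes in labels.items():
    # Majority label for this item serves as the consensus answer.
    consensus, _ = Counter(votes.values()).most_common(1)[0]
    for worker, label in votes.items():
        if label == consensus:   # pay only labels matching consensus
            payouts[worker] += REWARD_PER_LABEL

print(dict(payouts))  # {'w1': 2.0, 'w2': 2.0, 'w3': 1.0}
```

Here w3 mislabels one image and earns less, which is the basic incentive the token layer is meant to provide.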
III. Missing Puzzles in AI+Crypto
Let’s briefly discuss the currently missing puzzle pieces in this track from our perspective.
Lack of technological barriers: As mentioned earlier, the majority of AI+Crypto projects have almost no technological barriers as compared to traditional AI projects in Web 2.0. Instead, they rely more on economic models and token incentives in user experience, markets and operations. While this approach is understandable, given the strengths of decentralization and value distribution in web3, the lack of core barriers inevitably gives an “X-to-earn” feeling. We still hope to see more teams like RNDR, backed by companies such as OTOY, with core technologies making significant strides in the crypto space.
Current status of practitioners: Based on current observations, some teams in the AI+Crypto space are well-versed in AI but lack a deep understanding of web3. Conversely, some teams are highly crypto-native but have limited expertise in the AI field. This situation is reminiscent of the early days of the GameFi track, when some teams were well-versed in gaming and sought to transition Web 2.0 games to blockchain, while others were deeply immersed in web3, focusing on various innovative and optimized gaming models. MATR1X was the first team we encountered in the GameFi track that demonstrated a dual understanding of gaming and crypto, which is why I previously mentioned it as one of the three projects I firmly believed in back in 2023. We hope to see more teams in 2024 that possess a dual understanding of AI and crypto.
Business scenarios: AI+Crypto is in an extremely early exploration stage, and the various forms of assetization mentioned above are just a few major directions. Each direction has many subtracks that can be carefully explored and segmented. Currently, many projects in the market that integrate AI and crypto feel somewhat “awkward” or “rough,” failing to leverage the optimal competitiveness or combinability of AI and crypto. This is closely related to the second point mentioned above. For example, our in-house research and development team conceived and designed a more optimal integration method; however, despite observing numerous projects in the AI track, we have yet to see any teams entering this niche area. So we can only continue to wait.
You might ask why a VC like us can come up with certain scenarios before entrepreneurs in the market can. That’s because we have seven experts on our in-house AI team, five of whom have PhDs in AI.
Finally, although from the perspective of the primary market, AI+Crypto is still quite early and immature, this doesn’t prevent us from being optimistic about 2024–2025, when AI+Crypto will become one of the main tracks of this bull market cycle. After all, is there a better way to combine productivity that’s liberated by AI with production relations liberated by blockchain?
#Bybit #TheCryptoArk