- Name of project: Posthuman AI Marketplace
- Project Website:
Github : https://github.com/PosthumanMarket/Posthuman.py/tree/v0.3
Twitter : https://twitter.com/PosthumanNetwo1
- Proposal Wallet Address:
0x21e06646433954aabace8e3d93d502e423249299
- The proposal in one sentence:
This proposal is to continue our grant to execute the next steps in the development of Posthuman Marketplace. Posthuman allows training and inference of advanced NLP models without viewing model parameters (i.e. ZK-training and inference), using Compute to Data.
- Which category best describes your project? Pick one or more.
- Build / improve applications or integrations to Ocean
Summary of Progress
As promised, with this round we bring to the Ocean community our first public-facing implementation of Zero-Knowledge Model usage on the Rinkeby network. We’ve published two assets:
- DistilGPT2 model parameters, for use in ZK-state using C2D: https://market.oceanprotocol.com/asset/did:op:aa82a24d62de1eB817c2173fff8B8c12d7b1Bbf9
- Self-contained inference algorithm, tested for data leakage: https://market.oceanprotocol.com/asset/did:op:aa82a24d62de1eB817c2173fff8B8c12d7b1Bbf9
As sampling-based AI inference is non-deterministic, each inference request produces 5 new, unique completions by the AI of a paragraph from Wikipedia. We will shortly add all the other algorithms as assets for more complete testing.
Using these, users can get a feel for the power of Zero-Knowledge Model marketplaces on Ocean, without writing any code. One can perform inference and be certain that the same model asset was used to provide inference (verifiability), without being aware of the model parameters.
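The verifiability claim above can be sketched as a fingerprint check. This is an illustration of the principle, not the actual Posthuman or Ocean API: the marketplace publishes a hash of the serialized model parameters in the asset metadata, and a compute job attests the hash of the parameters it actually loaded, so a consumer can confirm the two match without ever seeing the parameters.

```python
import hashlib

def fingerprint(model_bytes: bytes) -> str:
    """Hash of serialized model parameters; published in asset metadata."""
    return hashlib.sha256(model_bytes).hexdigest()

def verify_inference(published_fp: str, attested_fp: str) -> bool:
    """Consumer-side check: the job was served by the exact model asset claimed."""
    return published_fp == attested_fp

# Illustration with dummy parameter bytes (not real DistilGPT2 weights):
params = b"\x00\x01distilgpt2-weights..."
fp = fingerprint(params)
assert verify_inference(fp, fingerprint(params))        # same model -> verified
assert not verify_inference(fp, fingerprint(b"other"))  # different model -> rejected
```

The consumer only ever handles hashes, so the parameters themselves stay hidden while the model's identity remains checkable.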
We’re proud to be the first to bring ZK-Verifiable AI Models to Ocean and the larger decentralized ecosystem.
We will be scaling this up over the coming month with our own K8s cluster and C2D setup (currently only accessible via API), to handle larger models more efficiently.
In addition, upgrades in hardware and kubernetes architecture funded by the last round enable Posthuman Marketplace (API) to now handle:
- Model sizes up to 11B parameters (up from 1.5B)
- Up to 5 concurrent requests served simultaneously; the rest are queued
- Latency under 3 seconds for inference requests
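The concurrency behaviour listed above (five requests at a time, the rest queued) can be sketched with a semaphore. The class and names here are illustrative, not the actual backend code:

```python
import threading

MAX_CONCURRENT = 5  # matches the limit stated above

class InferenceServer:
    """Toy handler: up to MAX_CONCURRENT jobs run at once, the rest block and wait."""
    def __init__(self):
        self.slots = threading.Semaphore(MAX_CONCURRENT)
        self.served = []
        self._lock = threading.Lock()

    def handle(self, request_id: int) -> None:
        with self.slots:  # blocks while 5 jobs are already in flight
            with self._lock:
                self.served.append(request_id)

server = InferenceServer()
threads = [threading.Thread(target=server.handle, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(server.served) == list(range(8))  # all 8 requests eventually served
```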
With this, we’re approaching completion of Posthuman v0.3 - the final iteration on the testnet, which will be opened to community testing before moving on to the Mainnet launch. This includes model access (API/UX), training, inference, and evaluation in a wide variety of settings.
With the very recent addition of Compute-To-Data to Ocean Market, we’ve also begun implementing a custom Marketplace UI based on a fork of the Market repo.
This is currently a WIP. We feel the Marketplace UI will massively streamline the functionality, in contrast with the clunky API-based access of Posthuman v0.2.
The custom UI will allow no-code, secure access to AI models, opening up an entirely new market for commercial, large scale sale of AI - something that doesn’t exist yet.
Further details, including links to updated code, and a walkthrough of the additional functionality, can be found in our Progress Tracking Document.
This grant round seeks to fund the completion of Posthuman v0.3, including integrating all the API functionality with a custom Marketplace Frontend, scaling and public testing. Once complete, we look forward to the mainnet launch by the end of the month, upon successful testing. We’re on track to meet the targets laid out in our previous proposal (copied below)[edit: grant deliverables for this grant added].
Project Overview
- Description of the project:
Large transformer models have major commercial applications in audio, video, and text-based AI. Due to the high cost of training and inference, it is not possible for most developers to utilise their own models, so they rely on centralised API access, which can be revoked at any time and comes with price and privacy concerns.
Posthuman tackles the following Problems:
- Ownership of model parameters of large transformer models is a crucial issue for developers that build on these models. If the API access can be unilaterally revoked or repriced, it makes for a very weak foundation for AI-based businesses.
- Next, there is a question of verifiability of claimed loss scores: it is nearly impossible to verify whether a particular centralised API request was actually served by the model promised, and not a smaller, cheaper model.
- Further, private ownership of models gives rise to a culture of closed competition rather than open collaboration: every improvement on the model requires express permission, and the use of a model so improved is also entirely permissioned.
The Solution:
Posthuman is a Marketplace based on Ocean Protocol that allows users to buy compute services on large NLP models. Model Providers contribute funds to train useful models, and Model Consumers purchase inference and evaluation on the models they find most useful. With Posthuman v0.2, users can now train, infer, and evaluate on any arbitrary text data.
Posthuman’s decentralised architecture achieves three goals that are impossible with centralised AI providers:
- Verifiable Training and Inference: The end user can know for sure which model served a particular inference request
- Zero-Knowledge training & ownership: The marketplace controls the models, ensuring each person who contributed to training is rewarded fairly, as all value created by these models remains on-chain and cannot be ‘leaked’.
- Censorship-Resistant Access : Access to AI is fast becoming a basic necessity for a productive life, however such access can easily be censored by centralised providers. With a decentralised alternative, any holder of crypto is guaranteed to be treated equally by the protocol.
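The "rewarded fairly" point above can be made concrete with a minimal sketch: inference revenue flows pro-rata to everyone who funded training. The split rule and addresses here are our illustration, not the protocol's actual tokenomics:

```python
def split_inference_revenue(revenue: float, contributions: dict) -> dict:
    """Distribute one inference fee pro-rata among training contributors, by stake."""
    total = sum(contributions.values())
    return {addr: revenue * stake / total for addr, stake in contributions.items()}

# Two trainers funded the model 3:1; a 100 OCEAN inference fee splits 75/25.
payouts = split_inference_revenue(100.0, {"0xTrainerA": 30.0, "0xTrainerB": 10.0})
assert payouts == {"0xTrainerA": 75.0, "0xTrainerB": 25.0}
```

Because all usage of the model is on-chain, every inference fee is visible and such a split can be enforced by the protocol rather than trusted to a provider.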
Value for Ocean
Ocean Protocol will form the backbone of zero-knowledge model publication on the Posthuman Marketplace. Additionally, all inference requests will be on the Ocean network due to the decentralised and zero-knowledge nature of the model; it will not be possible for an individual to run inference on a published model outside of the Ocean ecosystem.
Ocean’s Value for Project
Ocean Protocol provides a market for dataset assets, compute, and algorithms. Specifically, data providers can expose their data for ‘compute-eyes only’, ensuring no data leaks. Here we apply this principle to trained parameter values, sharing them for ‘compute-eyes only’ inference and fine-tuning, preserving the secrecy of model parameters and allowing repeated rewards to flow to those who participated in training.
- What is the final product (e.g. App, URL, Medium, etc)?
Posthuman Market will be a webapp that serves various AI models as C2D data. Posthuman v1 will include NLP models, including all state-of-the-art transformer models developed since the advent of BERT.
Posthuman tools will also be accessible via API, enabling app developers to directly integrate Posthuman inference in their AI applications.
- How does this project drive value to the Ocean ecosystem? This is best expressed as Expected ROI, details here.
NLP/Transformer models have been in extremely high demand since their ‘ImageNet moment’ in December 2019, when models of the BERT family exceeded human performance in Reading Comprehension (the SQuAD and GLUE benchmarks). The market for Transformer models is estimated at $30 Billion in 2021; however, this is a highly illiquid market, as models cannot be traded without fear of leakage, and inference cannot be offered because model integrity is unverifiable. In short, if trust existed between model provider and consumer, this market could become highly liquid, such as on Posthuman. This means that a large percentage of AI models may soon trade on Posthuman as a way to maximize profits.
Posthuman takes a Marketplace cut of 25%, as Posthuman also provides the hardware (GPUs) on which the AI models run. In addition, Ocean Protocol receives 0.25% of every datatoken transaction.
Assuming even 10% of NLP models are traded on Posthuman by the end of Y1:
- that would account for over $3 Billion in increased demand for $OCEAN. [OCEAN Datatoken Consuming Volume]
- and over $7.5 million in direct revenue to OCEAN. [Network Revenue]
The Bang/Buck Calculations thus work out as:
Bang = $7.5 million
Buck = $20,000 (+Future rounds)
Probability of success = 80%
Expected ROI = ($7.5 million / $20,000) × 0.8 = 300
Project Deliverables
IF: Build / improve applications or integration to Ocean, then:
- App will be live, at: https://posthuman.finance
- Is your software open-source? Yes.
- Project can be found (with permissive license if necessary) at: https://github.com/PosthumanMarket/Posthuman.py/tree/master_new
Grant Deliverables
Using the May 2021 grant, we aim to complete the following deliverables:
- Build a custom marketplace frontend for AI inference/training/evaluation
- Integrate all API functionality from Posthuman v0.2 into the marketplace frontend
- Community-test multiple models and algorithms, including distributed fine-tuning, on the Rinkeby marketplace
- Provide detailed documentation on using Posthuman via API as well as via the frontend
- Publish on Mainnet after sufficient community testing and integrating any feedback
Prior work:
We’ve already deployed v0.1 [single model fine-tuning] and v0.2 [multi-model and multi-dataset fine-tuning] of Posthuman on the Rinkeby testnet.
You can find an overview of our progress so far, including a deep-dive into the functionality, here: https://docs.google.com/document/d/1WUL2cv7jNUDQwq5KHipalmtYRiIds2-vpcPGAR5oxoA/edit
Briefly, in v0.2, we’ve created a prototype allowing inference from any arbitrary NLP model in a verifiable, zero-knowledge state. We provide example scripts for various functionalities, including zero-knowledge ownership, and federated training.
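The federated training mentioned above can be sketched as simple parameter averaging in the style of FedAvg. The flat-list representation of weights is an illustration, not the format our scripts actually use:

```python
def fed_avg(updates: list) -> list:
    """Average parameter updates from several trainers into one shared model.

    Each element of `updates` is one trainer's weight vector (equal lengths).
    """
    n = len(updates)
    return [sum(w) / n for w in zip(*updates)]

# Three trainers return fine-tuned weights; the marketplace aggregates them.
merged = fed_avg([[0.0, 3.0], [3.0, 3.0], [6.0, 3.0]])
assert merged == [3.0, 3.0]
```

In the zero-knowledge setting, this aggregation runs inside the compute environment, so no individual trainer's update (or the merged result) ever leaves the marketplace's hardware.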
While functional, the codebase has a few bugs and is not yet ready for production - especially with regard to scaling the marketplace’s kubernetes architecture to handle multiple concurrent training/inference requests. The code sometimes faces OOM errors in the backend, which we suspect is due to insufficient hardware (we’re using minikube on a single V100 GPU) and a lack of load balancing. We plan to address this in v0.3.
Secondly, we plan to integrate filecoin storage and secret network to our v0.2 prototype to add additional privacy and robustness. This will be completed over the next 2 months, including integration and testing cycles.
Third, we’ve also begun developing a custom UI for AI inference, to integrate with Posthuman market.
Finally, we’re working on API documentation to make it easy for developers to integrate AI models from Posthuman into their applications.
Roadmap
The central goals for this grant, deliverable over the next 30-45 days, are:
- Scaling to a production ready deployment (w/ full kubernetes load balancing), and open for community testing.
- Engaging AI companies by offering them a monetization opportunity; onboarding the first ~5 corporate customers.
- Thoroughly stress-testing the protocol.
- Develop API based access, including documentation; begin testing frontend UX.
These form a part of our larger plan:
- 20th April: Production-ready Rinkeby deployment, capable of handling concurrent requests efficiently - invitation for community testing. [Posthuman v0.3]
- 30th April : Partner with various AI companies for demo, onboard for launch, give advanced credits.
- 15th May : Publication on mainnet after fixing any bugs/errors. Includes API documentation for publishing, training, and using AI models. [Posthuman v1]
- 30th May : Addition of market & inference UI to increase usability; Add filecoin and Secret Network support [Posthuman v1.1]
- June (Tentative) : Market to end users, incentives to develop apps using AI on Posthuman.
Future Plans:
After a successful deployment of the NLP/Transformer based AI marketplace, we plan to expand to include image, speech, and large-scale reinforcement models (like for car driving), all available in the verifiable, zero-knowledge setting pioneered by Posthuman v1.
Project Details
If the project includes software:
- Are there any mockups or designs to date?
Yes, besides the Github, we have code overviews outlining the functionality in v0.2, and a short video demonstrating the functionality from the command line on Rinkeby:
Demo showing how a user can:
- Monetise a trained NLP model with datatokens
- Consume a custom-trained model's inference endpoint without accessing the model
- Repeat the consumption flow for custom evaluation/training
Deployed on the Rinkeby testnet.
- An overview of the technology stack?
AI Tech Stack
We experimented with many libraries for training large transformers, including DeepSpeed for very large models and Reformer for large context sizes. In the end, we decided to use the huggingface-transformers library, as it is the most versatile, offering hundreds of different transformer architectures under one library.
In particular, we’re likely to add DeepSpeed support at some point, as it allows training models with up to a trillion parameters - roughly 5 times larger than GPT-3. It would serve as a perfect test case for collaborative training; however, it requires ~100+ GPUs and is on our expansion roadmap.
We have currently tested v0.2 on one V100 GPU. One of the reasons for this proposal is expanding this to a production-ready 8 GPU kubernetes cluster.
Our initial tests were performed using GPT-2, i.e. the transformers.GPT2LMHeadModel module. Note that the ‘LMHead’ extension allows evaluation by computing loss scores when labels are provided.
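What the LMHead does when labels are supplied can be illustrated in plain Python: the logits are shifted one position against the labels, and the mean cross-entropy of predicting each next token is returned as the loss. This is a from-scratch sketch of the computation, not the transformers implementation itself:

```python
import math

def lm_loss(logits: list, token_ids: list) -> float:
    """Mean next-token cross-entropy, as GPT2LMHeadModel returns when labels are passed.

    logits[t] is a list of vocabulary scores predicting the token at position t+1.
    """
    total = 0.0
    steps = len(token_ids) - 1  # last position has no next token to predict
    for t in range(steps):
        row = logits[t]
        log_z = math.log(sum(math.exp(x) for x in row))  # log partition function
        total += log_z - row[token_ids[t + 1]]           # -log p(next token)
    return total / steps

# With uniform logits over a vocabulary of 4, the loss is exactly ln(4).
vocab, ids = 4, [0, 1, 2]
uniform = [[0.0] * vocab for _ in ids]
assert abs(lm_loss(uniform, ids) - math.log(4)) < 1e-9
```

This loss score is exactly what an evaluation compute job can report to a consumer without revealing any model parameters.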
Ocean Tech Stack
Trainers publish trained model parameters as an asset on ocean, handled by Posthuman protocol. The model is stored as a data asset on the marketplace’s hardware, and allows training and inference compute calls only. In this way, the actual parameters of the model remain a secret, and the datatoken remains the sole way to access that model (even for the creator). This eliminates any possible leakage of model parameters off-chain, preserving their on-chain value.
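The "compute calls only" rule above can be sketched as a service-level access check. The class and method names are hypothetical illustrations, not ocean.py's actual API:

```python
class ModelAsset:
    """Toy asset: parameters stay server-side; only compute services are exposed."""
    ALLOWED = {"train", "inference", "evaluate"}

    def __init__(self, parameters: bytes):
        self._parameters = parameters  # never leaves the marketplace's hardware

    def request(self, service: str, holds_datatoken: bool) -> str:
        if not holds_datatoken:
            raise PermissionError("datatoken required")
        if service not in self.ALLOWED:  # e.g. 'download' is always refused
            raise PermissionError(f"'{service}' not permitted on this asset")
        return f"{service} job scheduled"

asset = ModelAsset(b"weights")
assert asset.request("inference", holds_datatoken=True) == "inference job scheduled"
try:
    asset.request("download", holds_datatoken=True)
    raise AssertionError("download should have been refused")
except PermissionError:
    pass
```

Because 'download' is never an allowed service, even the model's creator can only reach the parameters through datatoken-gated compute calls, which is what keeps the model's value on-chain.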
Team members
Dhruv Mehrotra
- Role: Core developer - Python, Solidity
- Relevant Credentials:
  - GitHub: https://github.com/dhruvluci
  - LinkedIn: https://www.linkedin.com/in/dhruv-mehrotra-luci/
  - Gitcoin: https://gitcoin.co/dhruvluci
- Background/Experience:
  - Co-founder/CEO, LUCI [AI information retrieval for enterprise]
  - Patented first Legal AI to clear the Bar Exam [2019].
  - Invented Bayesian Answer Encoding, state-of-the-art in Open Domain QA in 2019.
  - Multiple hackathon winner and leading weekly earner, Gitcoin.
Hetal Kenaudekar
- Role: Core developer - Solidity, JS, Frontend
- Background/Experience:
  - Co-founder/COO, LUCI [AI information retrieval for enterprise]
  - Interface design, community engagement for various DeFi teams.
  - Solidity/JS/Frontend dev since early 2020; winner of multiple hackathons and grants.
Additional Information
Our startup has also received ~$25,000 in Angel funding, from the founders of two $10M+ companies.
Project Links:
Litepaper: https://drive.google.com/file/d/1zpAaU-O0jTGsAVV93Hq9mD6HpcO9K8eV/view?usp=sharing
Progress report with code overview:
Current (WIP) v0.3 Codebase - https://github.com/PosthumanMarket/Posthuman.py/tree/master_new
External Links: