Posthuman AI: Ocean DAO Round 9

  • Name of project: Posthuman AI
  • Project Website:

Github : https://github.com/PosthumanMarket/Posthuman.py/tree/1.2

Twitter : https://twitter.com/PosthumanNetwo1

  • Proposal Wallet Address:

0x21e06646433954aabace8e3d93d502e423249299

  • The proposal in one sentence:

Posthuman enables training and inference on advanced NLP models without exposing their parameters (i.e. zero-knowledge training and inference), using Ocean's Compute-to-Data.

  • Which category best describes your project? Pick one or more.
  • Build / improve applications or integrations to Ocean

Grant Amount Requested: 11000 USD

Project Overview

  • Description of the project:

Large transformer models have major commercial applications in audio, video, and text-based AI. Due to the high cost of training and inference, most developers cannot utilise their own models and instead rely on centralised API access, which can be revoked at any time and raises pricing and privacy concerns.

Posthuman tackles the following Problems:

  1. Ownership of Model Parameters of large transformer models is a crucial issue for developers that build on these models. If the API access can be unilaterally revoked or repriced, it makes for a very weak foundation for AI-based businesses.
  2. Next, there is the question of verifiability of claimed loss scores: it is nearly impossible to confirm that a particular centralised API request was actually served by the promised model, rather than by a smaller, cheaper one.
  3. Further, private ownership of models gives rise to a culture of closed competition rather than open collaboration: every improvement on the model requires express permission, and the use of a model so improved is also entirely permissioned.

The Solution:

Posthuman is a marketplace built on Ocean Protocol that allows users to buy compute services on large NLP models. Model providers contribute funds to train useful models, and model consumers purchase inference and evaluation on the models they find most useful. Users can train, infer, and evaluate on arbitrary text data.

Posthuman’s decentralised architecture achieves three goals that are impossible with centralised AI providers:

  • Verifiable Training and Inference: The end user can know for sure which model served a particular inference request
  • Zero-Knowledge training & ownership: The marketplace controls the models, ensuring each person who contributed to training is rewarded fairly, as all value created by these models remains on-chain and cannot be ‘leaked’.
  • Censorship-Resistant Access: Access to AI is fast becoming a basic necessity for a productive life; however, such access can easily be censored by centralised providers. With a decentralised alternative, any holder of crypto is guaranteed to be treated equally by the protocol.
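The verifiability goal above can be sketched with a simple hash-commitment pattern. This is an illustrative sketch only, not Ocean's or Posthuman's actual mechanism, and all names here are hypothetical: the publisher commits to a hash of the model weights, and every inference response carries that hash so the consumer can check which model served the request.

```python
import hashlib

def commit(weights: bytes) -> str:
    """Publisher-side: compute a public commitment to a model's parameters."""
    return hashlib.sha256(weights).hexdigest()

def serve_inference(weights: bytes, prompt: str) -> dict:
    """Model host: return the output together with the model commitment."""
    output = f"response to {prompt!r}"  # stand-in for a real model call
    return {"output": output, "model_hash": commit(weights)}

def verify(response: dict, expected_hash: str) -> bool:
    """Consumer-side: confirm the promised model produced this response."""
    return response["model_hash"] == expected_hash

big_model = b"parameters of the promised large model"
small_model = b"parameters of a cheaper substitute"

on_chain_commitment = commit(big_model)
honest = serve_inference(big_model, "hello")
dishonest = serve_inference(small_model, "hello")

print(verify(honest, on_chain_commitment))     # True
print(verify(dishonest, on_chain_commitment))  # False
```

A real deployment would need the hash computation to run inside a trusted compute environment (as Compute-to-Data provides), since a dishonest host could otherwise report any hash it likes.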

Value for Ocean

Ocean Protocol will form the backbone of zero-knowledge model publication on the Posthuman Marketplace. Additionally, all inference requests will run on the Ocean network due to the decentralised and zero-knowledge nature of the model: it will not be possible for an individual to run inference on a published model outside the Ocean ecosystem.

Ocean’s Value for Project

Ocean Protocol provides a market for data assets, compute, and algorithms. Specifically, data providers can expose their data for ‘compute-eyes only’, ensuring no data leaks. Here we apply the same principle to trained parameter values, sharing them for ‘compute-eyes only’ inference and fine-tuning, preserving the secrecy of model parameters and allowing repeated rewards to flow to those who participated in training.
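The ‘compute-eyes only’ access pattern can be illustrated with a minimal sketch: the consumer gets code paths for inference and fine-tuning, but none that return the raw weights. Names are hypothetical (this is not the ocean.py API), and Python name-mangling is only a stand-in; the real enforcement comes from Compute-to-Data executing inside the provider's environment.

```python
class SealedModel:
    """Holds parameters privately; exposes only compute, never the weights."""

    def __init__(self, weights):
        self.__weights = weights  # name-mangled; no method returns this

    def infer(self, x):
        # stand-in for a real forward pass: a simple dot product
        return sum(w * xi for w, xi in zip(self.__weights, x))

    def fine_tune(self, grads, lr=0.1):
        # gradient step updates the weights in place;
        # the caller still never sees the parameter values
        self.__weights = [w - lr * g for w, g in zip(self.__weights, grads)]

model = SealedModel([1.0, 2.0, 3.0])
print(model.infer([1.0, 1.0, 1.0]))  # 6.0
model.fine_tune([0.0, 0.0, 10.0])
print(model.infer([1.0, 1.0, 1.0]))  # 5.0
```

The point of the sketch is the interface shape: training contributors keep earning from the model because every use of it must go through the marketplace's compute service, never through a copy of the weights.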

  • What is the final product (e.g. App, URL, Medium, etc)?

Posthuman Market will be a web app that serves various AI models as C2D data. Posthuman v1 will include NLP models, including all state-of-the-art transformer models developed since the advent of BERT.

Posthuman tools will also be accessible via API, enabling app developers to directly integrate Posthuman inference in their AI applications.

  • How does this project drive value to the Ocean ecosystem? This is best expressed as Expected ROI, details here.

New: v1.2 calculation

After successful publication on the mainnet, we are in a position to make a more practical, short-run evaluation of ROI, in addition to the broad calculations presented before.

We already have one proprietary AI model ready for publication on Ocean, and at least five more in the pipeline, tailored to various enterprise use cases (e.g. information retrieval from text, accounting documentation, healthcare documentation, images, and customer-support bots).

We confidently expect each such model to create $250,000-$1,000,000 in additional OCEAN datatoken consumption volume over the next year. Using these estimates, we arrive at the following calculation for the coming year:

Total funding (including this round): $50,000 + future rounds
Expected increase in Ocean network demand over the next year, from 5 high-value AI models: $1,250,000-$5,000,000
Expected chance of success: 90% (as our v1 prototype was successful, we have raised our estimated probability of success)
Expected ROI: 22.5-90
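The ROI range above follows directly from the stated assumptions; the arithmetic is:

```python
# Reproducing the proposal's ROI arithmetic under its stated assumptions:
# $50,000 total funding, $1.25M-$5M added network demand, 90% success chance.
funding = 50_000
demand_low, demand_high = 1_250_000, 5_000_000
p_success = 0.9

roi_low = demand_low * p_success / funding
roi_high = demand_high * p_success / funding
print(roi_low, roi_high)  # 22.5 90.0
```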

Grant Deliverables:
[] Publish 2 Commercially Useful AI models on Polygon Mainnet Ocean Marketplace, after resolving all bugs.
[] Market these models across multiple channels, including by offering bounties
[] Continue development of the Posthuman AI Market (a fork of Ocean Market), including further work on deploying our own Kubernetes cluster on A100 GPUs.

Team members

Dhruv Mehrotra

Hetal Kenaudekar

Additional Information

After failing to raise funds in R7, we sought additional streams of income to support our project. We have secured $8k/month in angel funding and thus require a lower level of support ($11k) from Ocean DAO. We hope this will help us sustainably fund our long-term development objectives with the support of the Ocean community. Our startup has also received ~$50,000 in angel funding from the founders of two $10M+ companies.

Project Links:

Litepaper: https://drive.google.com/file/d/1zpAaU-O0jTGsAVV93Hq9mD6HpcO9K8eV/view?usp=sharing


We’ve completed the deliverables specified in this grant:

[x] Towards this, we trained two models -
Model 1: AI Assistant as a service - a custom GPT-2 model trained on conversational data. This model can be used to build and run conversational AI chatbots across fields, AI-based games (e.g. text adventures), etc. Published on Ocean Market. [Model: https://market.oceanprotocol.com/asset/did:op:4Ff0c8049458C19E08e125D6536af8716be5Ffa8
Algorithm: https://market.oceanprotocol.com/asset/did:op:16915E68A8b427321c2117Cd4B4b80d280962027 ]

Model 2: Wikipedia QA as a service - a custom RoBERTa model plus retriever pipeline trained on open-domain Wikipedia question answering. This model can answer questions from the entirety of Wikipedia text, supporting research across fields such as medical, historical, academic & scientific research. [publishing this week]

[x] Prepared bounty descriptions for Hackathon where developers build apps using our AI models.

[x] Switched file hosting to Filecoin and added an encryption library to maximise model security, towards building our own market. More: https://docs.google.com/document/d/1myCa3YGt-kVQzEndNVUtmlgPwXbW25oHDcjQ4OOVeSg/edit#