SIX ECHO

Reflection of creativity

The current situation on intellectual property

With the size and growth of the digital-content market comes one big problem: establishing an asset's identity and chain of ownership, known as provenance.

As seen in the diagram below, which shows the market's projected growth in wealth over the next decade, more assets mean more intellectual property infringement.

Source: Deloitte Luxembourg & ArtTactic Art & Finance Report 2017

As mentioned above, we foresee an ever-growing number of such problems. SIX Network has started the ECHO project to help creators of digital assets in various forms, whether photos, articles, novels, music, or videos, record and report ownership rights over their work. ECHO also helps those who want to publish, reproduce, or distribute an asset verify that it is legitimate and that using it does not violate the creator's rights.

Our goals

  • To let digital content creators store their works in our SIX digital asset storage chain, recorded on the blockchain

  • With proof of ownership and proof of existence, such works can easily be traced back to their first hash

  • With the power of blockchain, all related parties can collaborate and share revenue through agreements encoded in smart contracts

  • To facilitate decentralized wallet-to-wallet exchange between creators and consumers

  • To build a better future for the whole digital economy and the digital creative industries

Our solutions

  • ECHO will store an asset's identity and ownership history in a secure digital token/smart contract

    • Semantic metadata

      • Asset information such as title, author, creator, publisher, photographer, etc.

    • Digital fingerprint

      • The set of binary digits that uniquely identifies a digital asset

  • ECHO will not store content itself on the blockchain, only its semantic metadata and digital fingerprint

  • ECHO uses a unique consensus method to synchronize asset identities

  • ECHO creates an index of creative works that allows creators to explore their rights

  • The ECHO project is open source under the terms of the Apache-2.0 license

  • ECHO uses the SIX token as the fee to store and verify creative works

ECHO overall

Our technique

  • Digital fingerprint

    • Generate a content digest with a mathematical algorithm; the result is a set of binary arrays

  • Validators

    • By checking for duplication of:

      • Semantic metadata

      • Digest hash

  • Consensus model

    • Together with our platform partners, the SIX Storage Chain will manage and handle the consensus protocol as the governance core chain

  • Semantic metadata and digital fingerprints will also be published to other public chains, such as (but not limited to) Klaytn and Ethereum

In practice

  • ECHO provides an SDK that a platform, like OOKBEE, can install in its environment to communicate with our SIX Storage Chain

    • SDK functions:

      • Semantic metadata input

      • Digital fingerprint generation

  • ECHO is not limited to novels and articles; it extends to all digital-content creator businesses/industries, such as the music industry, production studios, advertising houses, etc.

  • Formats of digital content on ECHO

    • Sound/Music

    • Image/Photograph

    • Text (Novel)

ECHO technical process

Our technique for checking and validating IP duplication in ECHO is described below.

Image digest

First, we generate four digest formats:

  • aHash, the average hash value of the image digest

    • Reduce the size: shrinking the image (with the ANTIALIAS filter, to retain as much detail as possible) removes the image's high frequencies/detail. The result is an 8x8 image, 64 pixels in total

    • Reduce the colors: from RGB (64x64x64 values) to grayscale, leaving 64 color values in total

    • Compute the average of the pixels

      • pixels = numpy.asarray(image)

      • avg = pixels.mean()

      • We now have the mean value of the image's pixel array

    • Construct the bit array

      • Compare each pixel against the average, recording whether it is higher or lower in the bit array

    • Construct the hash

      • Reading the bits big-endian (left -> right, top -> bottom), we obtain a 64-bit integer

  • pHash, the perceptual hash value of the image digest

    • Reduce the size: shrinking the image (with the ANTIALIAS filter, to retain as much detail as possible) removes the image's high frequencies/detail. The result is an 8x8 image, 64 pixels in total

    • Reduce the colors: from RGB (64x64x64 values) to grayscale, leaving 64 color values in total

    • Compute the DCT (Discrete Cosine Transform). With the DCT, corrections/modifications to the color histogram and gamma do not lead to false misses: even for an image whose gamma and histogram have been changed, the average pixel value will not shift dramatically from the original image

    • Compute the average of the pixels

      • pixels = numpy.asarray(image)

      • avg = pixels.mean()

      • We now have the mean value of the image's pixel array

    • Construct the bit array

      • Compare each pixel against the average, writing 1 into the bit array if it is above the average and 0 if it is below

      • Because each entry is only 0 or 1, corrections/modifications to the histogram and gamma are likewise ignored

    • Construct the hash

      • Reading the bits big-endian (left -> right, top -> bottom), we obtain a 64-bit integer

  • dHash, the gradient-difference hash value of the image digest

    • Reduce the size: shrinking the image (with the ANTIALIAS filter, to retain as much detail as possible) removes the image's high frequencies/detail. The result is an 8x8 image, 64 pixels in total

    • Reduce the colors: from RGB (64x64x64 values) to grayscale, leaving 64 color values in total

    • Compute the gradient between adjacent pixels and record the trend in an array

    • Construct the hash

      • Reading the bits big-endian (left -> right, top -> bottom), we obtain a 64-bit integer

  • wHash, the discrete-wavelet-transform hash value of the image digest

    • Reduce the size: shrinking the image (with the ANTIALIAS filter, to retain as much detail as possible) removes the image's high frequencies/detail. The result is an 8x8 image, 64 pixels in total

    • Reduce the colors: from RGB (64x64x64 values) to grayscale, leaving 64 color values in total

    • Compute the DWT (Discrete Wavelet Transform). With the DWT, the image's pixel values are stored in [-1, 1] (taking the values -1, 0, 1); with these values, the image's frequencies are scaled and shifted into a square-shaped form

    • Compute the average of the pixels

      • pixels = numpy.asarray(image)

      • avg = pixels.mean()

      • We now have the mean value of the image's pixel array

    • Construct the bit array

      • Compare each pixel against the average, recording whether it is higher or lower in the bit array

    • Construct the hash

      • Reading the bits big-endian (left -> right, top -> bottom), we obtain a 64-bit integer
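The aHash steps above can be sketched in a few lines of Python. This is an illustrative sketch, not ECHO's SDK code: it takes a 2-D grayscale array and uses NumPy block-averaging as a stand-in for PIL's ANTIALIAS resize (both suppress high frequencies); the function name `ahash` is an assumption.

```python
import numpy as np

def ahash(gray, hash_size=8):
    """Average hash of a 2-D grayscale array (values 0-255)."""
    h, w = gray.shape
    # Trim so the dimensions divide evenly into hash_size blocks
    h, w = h - h % hash_size, w - w % hash_size
    # Shrink to hash_size x hash_size by block-averaging
    blocks = gray[:h, :w].reshape(
        hash_size, h // hash_size, hash_size, w // hash_size)
    small = blocks.mean(axis=(1, 3))
    # Threshold each pixel against the mean: 1 above, 0 below
    avg = small.mean()
    bits = (small > avg).flatten()
    # Pack the 64 bits big-endian (left -> right, top -> bottom)
    return int("".join("1" if b else "0" for b in bits), 2)
```

Running it on a 64x64 image whose right half is white and left half is black yields the repeating bit pattern `00001111` per row, packed into a single 64-bit integer.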

Text digest

First, we create three digests per text content (as part of Natural Language Processing, NLP), then check for duplication with the Jaccard similarity index.

  • Word tokenization: we divide the text content into smaller parts called tokens; each token comes from comparison against a dictionary (both Thai and English)

  • Create three groups of tokens:

    • An array of groups of 25 tokens to represent 50% of the content

    • An array of groups of 14 tokens to represent 70% of the content

    • An array of groups of 9 tokens to represent 80% of the content

  • Generate a hash of each group to represent that group's digest

  • Together, these three hashes represent the content's fingerprint

  • To check for duplication of text content, we use the Jaccard similarity index; the formula is: J(A, B) = |A ∩ B| / |A ∪ B|

  • By the above formula, in our case:

    • Count the number of group-25 tokens shared between the two contents (the intersection)

    • Count the number of distinct tokens across the two contents (the union)

    • Divide the shared count by the union count

    • Multiply the result by 100 to get the percentage similarity of the two contents

      • If it is less than 50%, we consider the contents not the same

      • If it is more than 50%, we check further

        • We now check against the group of 14 tokens

        • If it is less than 70%, we consider the contents not the same

        • If it is more than 70%, we check against the last group of tokens, the group of 9

          • If the result is less than 80%, the content is good to go

          • If the result is more than 80%, we reject the content as a duplication-check failure
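The tiered check above can be sketched as follows. This is an illustrative sketch only: it compares raw token groups with the Jaccard index directly (the production system compares group hashes), and the function names and thresholds-as-fractions are assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity of two token collections: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def is_duplicate(groups_a, groups_b):
    """Tiered duplication check over the three token groups.

    groups_* are (group25, group14, group9) token collections; the
    cascade mirrors the 50% / 70% / 80% thresholds described above.
    """
    for (ta, tb), threshold in zip(zip(groups_a, groups_b),
                                   (0.5, 0.7, 0.8)):
        if jaccard(ta, tb) < threshold:
            return False  # below this tier's threshold: not the same content
    return True  # exceeded all three thresholds: reject as a duplicate
```

A content pair only counts as a duplicate if it clears every tier; failing any single tier short-circuits the check and passes the content.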

SIX Storage Chain

Now, with the digest information, we prepare it in our SSC smart-contract format and submit it to the SSC.

Once we receive a new image from ECHO's SDK whose IP is to be recorded in our SSC, then:

  • Verify duplication by submitting its digest information to the SSC

    • The SSC (SIX Storage Chain) is built on EOS, which spares us from constructing the chain from scratch. Thanks to the chain's strengths, we can easily build our Delegated Proof-of-Stake model, and operations and services incur no associated costs beyond CPU and RAM.

    • The SSC validator nodes check for duplication by comparing the submitted digest information against the digests already on the chain: four hashes (aHash, pHash, dHash, and wHash) for an image, or three hashes for a text

    • Once the digest passes, the chosen verifier node submits it to the SSC

      • The remaining nodes then verify its correctness
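A common way to compare 64-bit perceptual hashes like the four image digests above is the Hamming distance (number of differing bits). The sketch below is an assumption about how such a comparison could look; the distance threshold of 10 bits is illustrative and is not specified by ECHO.

```python
def hamming(h1, h2):
    """Number of differing bits between two 64-bit hash integers."""
    return bin(h1 ^ h2).count("1")

def looks_duplicate(digests_a, digests_b, max_distance=10):
    """Flag two images as near-duplicates if any of their four digests
    (aHash, pHash, dHash, wHash) differ by at most max_distance bits.

    max_distance=10 is an illustrative choice, not an ECHO value.
    """
    return any(hamming(a, b) <= max_distance
               for a, b in zip(digests_a, digests_b))
```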

ECHO on public chain

ECHO transaction ID and block number

With our ECHO transaction ID and the asset's block number, we publish this information to public chains, such as (but not limited to) Klaytn and Ethereum.

With these two pieces of data on a public chain, all of a digital asset's information can be checked in ECHO.WORK, which is currently in selected-user testing and will be publicly released soon (Q1 2020).