Hyperledger Fabric Deployment with Kafka












I am confused about setting up the Kafka nodes in a production environment. I know the Hyperledger community suggests that Kafka be set up by a joint venture of the participating organizations, since the community prefers to keep all the Kafka and ZooKeeper nodes in a single datacenter for better performance.



However, I can't have a joint venture of all the orgs. Can the Kafka, ZooKeeper and orderer nodes be distributed across multiple orgs? For instance:






Suppose there are two orgs; they could host the following (a rough configtx.yaml sketch follows the list):




  • Each org has its own MSP

  • Each org has 4 peers

  • The Kafka cluster is shared, i.e. Org1 hosts 2 Kafka brokers and 2 ZooKeeper nodes, and Org2 hosts 2 Kafka brokers and 1 ZooKeeper node.

  • Each org has 2 orderers.
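
For illustration, the orderer section of the channel's configtx.yaml for such a shared cluster might look roughly like the sketch below (Fabric 1.x, Kafka-based ordering); all hostnames and org names are hypothetical. Note that only the Kafka brokers appear here; the ZooKeeper ensemble is configured on the broker side, not in configtx.yaml.

    # Hypothetical configtx.yaml excerpt -- hostnames are placeholders.
    Orderer: &OrdererDefaults
      OrdererType: kafka
      Addresses:                        # two orderers hosted by each org
        - orderer0.org1.example.com:7050
        - orderer1.org1.example.com:7050
        - orderer0.org2.example.com:7050
        - orderer1.org2.example.com:7050
      Kafka:
        Brokers:                        # 2 brokers in Org1, 2 brokers in Org2
          - kafka0.org1.example.com:9092
          - kafka1.org1.example.com:9092
          - kafka0.org2.example.com:9092
          - kafka1.org2.example.com:9092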










apache-kafka hyperledger-fabric hyperledger

asked Nov 22 '18 at 2:54 by Nitish Bhardwaj























  • You could have independent Kafka clusters in each, then share data between them using MirrorMaker, for example. You definitely should not use an even number of ZooKeepers, or only one, though

    – cricket_007
    Nov 22 '18 at 20:20
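
    For what it's worth, a rough sketch of that kind of mirroring with the legacy MirrorMaker tool (Kafka <= 2.3); hostnames and file names are placeholders:

    # source.properties (consumer side, reads from Org1's cluster)
    #   bootstrap.servers=kafka0.org1.example.com:9092
    #   group.id=org1-to-org2-mirror
    # target.properties (producer side, writes to Org2's cluster)
    #   bootstrap.servers=kafka0.org2.example.com:9092
    bin/kafka-mirror-maker.sh \
      --consumer.config source.properties \
      --producer.config target.properties \
      --whitelist ".*"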













  • @cricket_007 Thanks for sharing your thoughts. But I am more concerned about the security of the nodes. Kafka would store all the events that need to be used to generate a block. If the Kafka nodes were hosted by the orgs individually, they might change the events in some way and we would never get to know, because peers only validate the ReadSet and not the WriteSet. I could always go to Kafka (I know its entire security layer), use the TLS certs of any orderer hosted by me, and change the WriteSet value for any ReadSet. The transaction would still go through, since it has a valid ReadSet, and would store junk.

    – Nitish Bhardwaj
    Nov 23 '18 at 2:40













  • Kafka is append-only, so I'm not sure what you mean by "change the events"... Plus, I have no familiarity with Hyperledger except for the documentation page about the Kafka setup... If security is the issue, then sure, you can use SASL_SSL for Kafka

    – cricket_007
    Nov 23 '18 at 2:43
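
    For illustration, a minimal broker-side server.properties sketch for SASL_SSL; hostnames, paths and passwords are placeholders, and a matching JAAS configuration for the PLAIN mechanism is assumed on each broker:

    listeners=SASL_SSL://0.0.0.0:9093
    advertised.listeners=SASL_SSL://kafka0.org1.example.com:9093
    security.inter.broker.protocol=SASL_SSL
    sasl.mechanism.inter.broker.protocol=PLAIN
    sasl.enabled.mechanisms=PLAIN
    ssl.keystore.location=/etc/kafka/ssl/kafka0.keystore.jks
    ssl.keystore.password=changeit
    ssl.truststore.location=/etc/kafka/ssl/kafka0.truststore.jks
    ssl.truststore.password=changeit
    ssl.client.auth=required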













  • I am sorry for not being clear; I am new to Kafka. I am just thinking about a scenario where an event is pushed to the queue and somehow we manage to get to the queue and change the event, i.e. I pushed the value 'A' but then changed it to 'B'. Is that possible by any means in Kafka? Or is Kafka an immutable event queue where you can't change any event that has already been pushed?

    – Nitish Bhardwaj
    Nov 23 '18 at 2:51











  • It is indeed immutable, assuming you're not giving out root SSH access to the brokers where the data is actually stored

    – cricket_007
    Nov 23 '18 at 2:52
















1 Answer
A distributed Fabric network deployment has to fine-tune the trade-off between division of responsibility and trust among the organizations hosting the services. As suggested in the question, peer organizations sharing the hosting of the Kafka-ZooKeeper cluster and the ordering service nodes may lead to scalability and trust issues as the size of the consortium grows. The two major concerns are:




  1. ZooKeepers must be deployed in odd numbers to avoid split-brain
    problems, and ensembles larger than 7 nodes are not recommended. If peer
    organizations take on the responsibility of hosting ZooKeepers as well,
    the solution will not scale once the ZooKeeper ensemble maxes out
    and new organizations keep joining.


  2. On the other hand, hosting OSNs (ordering service nodes) within the
    same organization as the peers is discouraged as well. This is due to
    the default block validation policy defined by Fabric, which allows any
    valid certificate of the ordering organization, in this case the one
    also hosting peer nodes, to sign blocks. This essentially means that a block signed with any valid certificate generated by an organization hosting a variety of services, including the ordering service, will pass validation. So, as the Fabric documentation puts it (a sketch of the relevant policy section follows the quote):




    if an organization is acting both in an ordering and application role,
    then this policy should be updated to restrict block signers to the
    subset of certificates authorized for ordering
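
For illustration, a minimal sketch of where that policy lives in configtx.yaml, assuming a Fabric 1.x channel configuration; OrdererOrgMSP is a placeholder name. The commented lines show the permissive default, the lines below them a restricted Signature policy:

    # Hypothetical configtx.yaml excerpt -- OrdererOrgMSP is a placeholder.
    Orderer: &OrdererDefaults
      OrdererType: kafka
      Policies:
        BlockValidation:
          # Default (permissive): any writer certificate in the orderer group may sign blocks.
          #   Type: ImplicitMeta
          #   Rule: "ANY Writers"
          # Restricted: only members of the dedicated ordering MSP may sign blocks.
          Type: Signature
          Rule: "OR('OrdererOrgMSP.member')"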









answered Dec 2 '18 at 7:04 by msingh, edited Dec 2 '18 at 7:10
























