Java: How to make sure that a shared data write affects multiple readers
I am trying to code a processor-intensive task, so I would like to use multithreading and split the calculation across the available processor cores.



Let's say I have thousands of iterations, and each iteration has two phases:




  1. Several worker threads scan through hundreds of thousands of options. During this phase they only read data from a shared array (or some other data structure); the data is never modified.

  2. One thread collects the results from all the worker threads (while they are waiting) and makes modifications to the shared array.


The phases run strictly in sequence, so there is no overlap (no concurrent writing and reading of the data). My problem is: how can I be sure that the data (cache) is up to date for the worker threads before the next Phase 1 starts?



I am assuming that when people speak about cache or caching in this context, they mean the processor cache (correct me if I'm wrong).



As I understand it, volatile applies only to the reference (or to a primitive field), not to the data stored in the array, while there is no point in using synchronized, because the worker threads would block each other on every read (there can be thousands of reads while processing one option).



What else can I use in this case?



Right now I have a few ideas, but I don't know how costly they are (most probably quite costly):




  1. create new worker threads for every iteration

  2. in a synchronized block, make a copy of the array (which can be up to 195 kB) for each thread before a new iteration begins

  3. I read about ReentrantReadWriteLock, but I can't understand how it relates to caching. Can acquiring a read lock force the reader's cache to update? (See the barrier sketch below for another option.)
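
For context on what the java.util.concurrent synchronizers guarantee here: any barrier at the phase boundary already establishes the needed happens-before edge, so writes made during Phase 2 are visible to every worker in the next Phase 1, with no per-element volatile and no copying. A minimal sketch of the two-phase pattern with CyclicBarrier (class and method names are illustrative, not from the question):

    import java.util.concurrent.CyclicBarrier;

    public class PhasedSearch {
        static final int WORKERS = Runtime.getRuntime().availableProcessors();
        static final int[] shared = new int[50_000];   // read-only during Phase 1

        public static void main(String[] args) {
            // The barrier action runs while all workers are parked at await(),
            // so it is the sole writer. The JMM guarantees: actions before
            // await() happen-before the barrier action, which happens-before
            // the return from await() in every worker.
            CyclicBarrier barrier =
                    new CyclicBarrier(WORKERS, PhasedSearch::updateSharedArray);

            for (int w = 0; w < WORKERS; w++) {
                new Thread(() -> {
                    try {
                        for (int iteration = 0; iteration < 1_000; iteration++) {
                            scanOptions();       // Phase 1: read-only work on shared[]
                            barrier.await();     // Phase 2 runs here; its writes are
                        }                        // visible once await() returns
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }

        static void scanOptions() { /* read shared[], never write it */ }
        static void updateSharedArray() { /* modify shared[], runs alone between phases */ }
    }
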
java multithreading cpu-cache

asked Nov 21 at 3:06 by theo, edited Nov 21 at 6:47 by Ishaan

  • "My problem is, how to be sure the data (cache) is updated for the working threads before the next phase 1 starts." - in general you would wait for the thread(s) doing the updating to finish and then start another phase 1. For example, you might do this by using the join method. If you haven't read through the Java Tutorial on Concurrency I'd suggest starting there.
    – D.B.
    Nov 21 at 3:44
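
For the record, the join approach from this comment solves the visibility part as well: everything a thread wrote happens-before join() on that thread returns. A minimal sketch (thread counts and sizes invented for illustration):

    // All writes a thread makes happen-before join() on it returns, so each
    // fresh batch of workers is guaranteed to see the writer's updates.
    public class JoinPhases {
        static final int[] shared = new int[50_000];

        public static void main(String[] args) throws InterruptedException {
            for (int iteration = 0; iteration < 1_000; iteration++) {
                Thread[] workers = new Thread[4];
                for (int i = 0; i < workers.length; i++) {
                    workers[i] = new Thread(() -> { /* Phase 1: read shared[] */ });
                    workers[i].start();
                }
                for (Thread w : workers) w.join();   // Phase 1 results visible here

                Thread writer = new Thread(() -> { /* Phase 2: modify shared[] */ });
                writer.start();
                writer.join();                       // writes visible to next Phase 1
            }
        }
    }
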
  • Thank you! I finished the tutorial just now. Good for review, but nothing really new. I know how to synchronize the threads; I just don't know how to keep memory consistency. If I make an array volatile, I assume it applies to the reference and not to the data stored in the array, which means I can't know whether changes to the data stored in the array are visible to all the threads. If I treat the array as immutable, volatile will help, but then I have to make a copy of 195 kB worth of data before every iteration. Or I have misunderstood something really badly...
    – theo
    Nov 22 at 21:03
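
That reading of volatile is correct: it orders only the reference write. But that is exactly what makes the copy-and-republish idea work, because every element write done before the volatile store becomes visible to readers after their next volatile load. A sketch of that pattern (hypothetical names; the ~195 kB copy cost remains):

    // The writer mutates a private copy and then republishes it through the
    // volatile reference; readers pay only one volatile load per access.
    class SharedTable {
        private volatile int[] current = new int[50_000];

        int read(int i) {
            return current[i];
        }

        void applyPhase2Update(int index, int value) {   // single writer, between phases
            int[] next = current.clone();    // the ~195 kB copy the question worries about
            next[index] = value;             // ...plus whatever other modifications
            current = next;                  // volatile store publishes all of it
        }
    }
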
1 Answer
The thing I was searching for was mentioned in the Java Tutorial on Concurrency; I just had to look deeper. In this case it was the AtomicIntegerArray class. Unfortunately, it is not efficient enough for my needs, but I ran some tests that may be worth sharing.



I approximated the cost of different memory-access methods by running each one many times, averaging the elapsed times, and breaking everything down to the cost of one average read or write.



I used an integer array of size 50000 and repeated every test method 100 times, then averaged the results. The read tests perform 50000 random(ish) reads. The results show the approximate time of one read/write access. This can't be treated as an exact measurement, but I believe it gives a good sense of the relative costs of the different access methods. On a different processor, or with different parameters, the results may look completely different because of different cache sizes and clock speeds.
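
The harness itself isn't shown above; the following is an illustrative reconstruction of its shape, not the exact code. (A serious measurement would use JMH to control for JIT warm-up and dead-code elimination.)

    import java.util.concurrent.atomic.AtomicIntegerArray;

    public class AccessCostBench {
        static final int SIZE = 50_000;
        static final int REPEATS = 100;

        public static void main(String[] args) {
            AtomicIntegerArray atomic = new AtomicIntegerArray(SIZE);
            long total = 0;
            for (int r = 0; r < REPEATS; r++) {
                long t0 = System.nanoTime();
                for (int i = 0; i < SIZE; i++) {
                    atomic.set(i, i);                 // "fill time with set"
                }
                total += System.nanoTime() - t0;
            }
            double perWrite = (double) total / ((long) REPEATS * SIZE);
            System.out.println("Fill time with set is: " + perWrite + " ns");
        }
    }
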



So the results are:




  1. Fill time with set: 15.922673 ns

  2. Fill time with lazySet: 4.5303152 ns

  3. Atomic read time: 9.146553 ns

  4. Synchronized read time: 57.858261399999996 ns

  5. Single-threaded fill time: 0.2879112 ns

  6. Single-threaded read time: 0.3152002 ns

  7. Immutable copy time: 0.2920892 ns

  8. Immutable read time: 0.650578 ns


Points 1 and 2 show the write cost on an AtomicIntegerArray with sequential writes. In some article I read about the good efficiency of the lazySet() method, so I wanted to test it. It usually outperforms the set() method by about 4 times, although different array sizes show different results.
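
The difference being measured: set() has volatile-write semantics (it needs a full fence after the store), while lazySet() is an ordered store that omits that trailing fence, so other threads may observe the new value slightly later. For example:

    import java.util.concurrent.atomic.AtomicIntegerArray;

    public class SetVsLazySet {
        public static void main(String[] args) {
            AtomicIntegerArray a = new AtomicIntegerArray(8);
            a.set(0, 42);      // volatile-write semantics: full fence, immediately visible
            a.lazySet(1, 43);  // ordered store, no trailing fence: cheaper per write
            System.out.println(a.get(0) + " " + a.get(1));
        }
    }
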



Points 3 and 4 show the difference between "atomic" access and synchronized access (a synchronized getter) to one item of the array, via random(ish) reads by four threads simultaneously. This clearly shows the advantage of the "atomic" access.
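
The synchronized variant presumably looked something like the following (a reconstruction for illustration): every synchronized read contends on a single monitor, while AtomicIntegerArray.get() is a lock-free volatile read.

    class SyncIntArray {
        private final int[] data;

        SyncIntArray(int size) { data = new int[size]; }

        synchronized int get(int i) {    // point 4: "synchronized read"
            return data[i];
        }
    }
    // versus point 3: atomicArray.get(i) on a java.util.concurrent.atomic.AtomicIntegerArray
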



Since the first four values looked shockingly high, I wanted to measure the access times without multithreading, which gave the results in points 5 and 6. I tried to copy and modify the methods from the previous tests to keep the code as close as possible, though of course there may be compiler optimizations I can't control.



Then, just out of curiosity, I came up with points 7 and 8, which imitate immutable access. Here one thread creates the array (by sequential writes) and passes its reference to another thread, which performs the random(ish) read accesses on it.
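
A reconstruction of that setup (illustrative, not the exact test code): handing the array to the reader before start() is itself a safe publication, because Thread.start() happens-before the started thread's first action, so no further synchronization is needed as long as nobody mutates the array afterwards.

    import java.util.Random;

    public class ImmutableRead {
        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[50_000];
            for (int i = 0; i < data.length; i++) data[i] = i;  // "immutable copy time"

            Thread reader = new Thread(() -> {                  // "immutable read time"
                Random rnd = new Random(42);
                long sum = 0;
                for (int i = 0; i < data.length; i++) {
                    sum += data[rnd.nextInt(data.length)];      // random(ish) reads
                }
                System.out.println(sum);
            });
            reader.start();
            reader.join();
        }
    }
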



The results vary heavily when the parameters change, such as the size of the array or the number of test methods running.



The conclusion:
If an algorithm is extremely memory-intensive (lots of reads from the same small array, interrupted only by short calculations, which is my case), multithreading can slow the calculation down instead of speeding it up. But if it performs very many reads relative to the size of the array, it may be worth using an immutable copy of the array and multiple threads.
answered Nov 24 at 14:40 by theo, edited Nov 24 at 15:51