Understanding the difference between these two scaling properties

I need help understanding the following paragraph from a book on algorithms:




Search spaces for natural combinatorial problems tend to grow
exponentially in the size N of the input; if the input size increases
by one, the number of possibilities increases multiplicatively. We’d
like a good algorithm for such a problem to have a better scaling
property: when the input size increases by a constant factor—say, a
factor of 2—the algorithm should only slow down by some constant
factor C.




I don't really get why one is better than the other. If anyone can formulate any examples to aid my understanding, it's greatly appreciated.

algorithm performance big-o scaling

asked Nov 25 '18 at 22:41

Ronan
383

  • Are you familiar with the idea of time complexity?

    – Mitchel Paulin
    Nov 25 '18 at 22:48











  • I'm voting to close this question as off-topic because it is not directly related to a programming problem. It may be a better fit over at cs.stackexchange.com

    – pstrjds
    Nov 25 '18 at 22:55











  • Related / duplicate: What is a plain English explanation of "Big O" notation? - there are lots of examples and explanations.

    – Dukeling
    Nov 25 '18 at 23:06

1 Answer

Let's consider the following problem: you're given a list of numbers, and you want to find the longest subsequence of that list where the numbers are in ascending order. For example, given the sequence



2  7  1  8  3  9  4  5  0  6


you could form the subsequence [2, 7, 8, 9] as follows:



2  7  1  8  3  9  4  5  0  6
^  ^     ^     ^


but there's an even longer one, [1, 3, 4, 5, 6] available here:



2  7  1  8  3  9  4  5  0  6
      ^     ^     ^  ^     ^


That one is in fact a longest increasing subsequence of this list (though it's tied: [2, 3, 4, 5, 6] also has length five).



Now that we have this problem, how would we go about solving it in the general case where you have a list of n numbers? Let's start with a not-so-great option. One possibility would be to list off all the subsequences of the original list of numbers, filter out everything that isn't in increasing order, and then take the longest of the ones we find. For example, given this short list:



2  7  1  8


we'd form all the possible subsequences, which are shown here:





  • [] (the empty subsequence)

  • [8]

  • [1]

  • [1, 8]

  • [7]

  • [7, 8]

  • [7, 1]

  • [7, 1, 8]

  • [2]

  • [2, 8]

  • [2, 1]

  • [2, 1, 8]

  • [2, 7]

  • [2, 7, 8]

  • [2, 7, 1]

  • [2, 7, 1, 8]


Yikes, that list is pretty long. But by looking at it, we can see that the longest increasing subsequence here is [2, 7, 8], which has length three.
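
If seeing that strategy as code helps, here's a minimal brute-force sketch in Python (my own illustration, not from the book). It enumerates every subsequence with itertools.combinations, which chooses positions and therefore preserves the original order, then keeps the longest strictly increasing one:

from itertools import combinations

def lis_brute_force(nums):
    # Try all 2^n subsequences, keep only the strictly increasing
    # ones, and remember the longest found so far.
    best = []
    for size in range(len(nums) + 1):
        for subseq in combinations(nums, size):
            if all(a < b for a, b in zip(subseq, subseq[1:])):
                if size > len(best):
                    best = list(subseq)
    return best

print(lis_brute_force([2, 7, 1, 8]))  # [2, 7, 8]

This works fine on a four-element list and hopelessly on a long one, for exactly the counting reason that comes next.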



Now, how well is this going to scale as our input list gets longer and longer? Here's something to think about - how many subsequences are there of this new list, which I made by adding 3 to the end of the existing list?



2  7  1  8  3


Well, every existing subsequence is still a perfectly valid subsequence here. But on top of that, we can form a bunch of new subsequences. In fact, we could take any existing subsequence and then tack a 3 onto the end of it. That means that if we had S subsequences for our length-four list, we'll have 2S subsequences for our length-five list.



More generally, you can see that if you take a list and add one more element onto the end of it, you'll double the number of subsequences available. That's a mathematical fact, and it's neither good nor bad by itself, but if we're in the business of listing all those subsequences and checking each one of them to see whether it has some property, we're going to be in trouble, because there are going to be a ton of subsequences. We already saw that there are 16 subsequences of a four-element list. That means there are 32 subsequences of a five-element list, 64 subsequences of a six-element list, and, more generally, 2^n subsequences of an n-element list.
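
You can watch that doubling happen directly. A quick illustrative check (counting by positions, so the empty subsequence is included):

from itertools import combinations

def count_subsequences(nums):
    # One subsequence per choice of positions to keep.
    n = len(nums)
    return sum(1 for size in range(n + 1)
                 for _ in combinations(range(n), size))

print(count_subsequences([2, 7, 1, 8]))     # 16
print(count_subsequences([2, 7, 1, 8, 3]))  # 32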



With that insight, let's make a quick calculation. How many subsequences are we going to have to check if we have, say, a 300-element list? We'd have to potentially check 2^300 of them - a number that's bigger than the number of atoms in the observable universe! Oops. That's going to take way more time than we have.
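
To put a number on "way more time than we have," a two-line check:

n = 300
print(len(str(2 ** n)))   # 91 -- so 2^300 has 91 digits
print(2 ** n > 10 ** 80)  # True -- more than the ~10^80 atoms out there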



On the other hand, there's a beautiful algorithm called patience sorting that will always find the longest increasing subsequence, and which does so quite efficiently. You can do this by playing a little game. You'll place each of the items in the list into one of several piles. To determine which pile to pick, scan the piles from left to right and look for the first one whose top number is bigger than the number in question; place the number on top of that pile. If there's no such pile, put the number into its own new pile on the far right.



For example, given this original list:



2  7  1  8  3  9  4  5  0  6


after playing the game we'd end up with these piles:



0
1  3  4  5
2  7  8  9  6


And here's an amazing fact: the number of piles used equals the length of the longest increasing subsequence. Moreover, you can find that subsequence in the following way: every time you place a number on top of a pile, make a note of the number that was on top of the pile to its left. If we do this with the above numbers, here's what we'll find; the parenthesized number tells us what was on top of the pile to the left at the time we put the number down:



0
1  3 (1)  4 (3)  5 (4)
2  7 (2)  8 (7)  9 (8)  6 (5)


To find the subsequence we want, start with the top of the rightmost pile. Write that number down, then look up the number in parentheses next to it and repeat this process. Doing that here gives us 6, 5, 4, 3, 1, which, if reversed, is 1, 3, 4, 5, 6, the longest increasing subsequence! (Wow!) You can prove that this works in all cases, and it's a really beautiful exercise to actually go and do this.
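
If you'd like to see the whole game as code, here's one way to sketch it in Python (my rendering of the idea above, not a canonical implementation). The back array plays the role of the parenthesized notes; I use bisect to find the right pile, which is actually faster than the left-to-right scan analyzed below, though a plain scan over the pile tops works just as well:

import bisect

def longest_increasing_subsequence(nums):
    if not nums:
        return []
    tops = []      # top card of each pile, left to right (stays sorted)
    top_idx = []   # index into nums of each pile's current top card
    back = [None] * len(nums)  # the parenthesized notes
    for i, x in enumerate(nums):
        # Leftmost pile whose top card is bigger than x; with distinct
        # values, bisect_left on the sorted tops finds exactly that pile.
        p = bisect.bisect_left(tops, x)
        if p > 0:
            back[i] = top_idx[p - 1]  # note the top of the pile to the left
        if p == len(tops):
            tops.append(x)            # no pile works: start a new one
            top_idx.append(i)
        else:
            tops[p] = x               # place x on top of pile p
            top_idx[p] = i
    # Start at the top of the rightmost pile and follow the notes back.
    result, i = [], top_idx[-1]
    while i is not None:
        result.append(nums[i])
        i = back[i]
    return result[::-1]

print(longest_increasing_subsequence([2, 7, 1, 8, 3, 9, 4, 5, 0, 6]))
# [1, 3, 4, 5, 6]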



So now the question is how fast this process is. Placing the first number down takes one unit of work - just place it in its own pile. Placing the second number down takes at most two units of work - we have to look at the top of the first pile, and optionally put the number into a second pile. Placing the third number takes at most three units of work - we have to look at up to two piles, and possibly place the number into its own third pile. More generally, placing the kth number down takes at most k units of work. Overall, this means that the work we're doing is roughly




1 + 2 + 3 + ... + n




if we have n total elements. That's the famous sum often attributed to Gauss, and it comes out to exactly n(n + 1) / 2, which is approximately n^2 / 2. So we can say that we'll need to do roughly n^2 / 2 units of work to solve things this way.
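
A one-line sanity check of that formula, if you want it:

n = 300
print(sum(range(1, n + 1)))  # 45150 = 300 * 301 / 2, roughly 300^2 / 2 = 45000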



How does that compare to our 2^n solution from before? Well, unlike 2^n, which grows stupidly fast as a function of n, n^2 / 2 is actually a pretty nice function. If we plug in n = 300, which previously in 2^n land gave back "the number of atoms in the universe," we get back a more modest 45,000. If that's a number of nanoseconds, that's nothing; that'll take a computer under a second to do. In fact, you have to plug in a pretty big value of n before you're looking at something that's going to take the computer quite a while to complete.



The function n^2 / 2 has an interesting property compared with 2^n. With 2^n, if you increase n by one, as we saw earlier, 2^n will double. On the other hand, if you take n^2 / 2 and increase n by one, then n^2 / 2 will get bigger, but not by much (specifically, by n + 1/2).



By contrast, if you take 2^n and then double n, then 2^n squares in size, since 2^(2n) = (2^n)^2 - yikes! But if you take n^2 / 2 and double n, then n^2 / 2 goes up only by a factor of four - not that bad, actually, given that we doubled our input size!



This gets at the heart of what the quote you mentioned is talking about. Algorithms with runtimes like 2^n, n!, etc. scale terribly as a function of n, since increasing n by one causes a huge jump in the runtime. On the other hand, functions like n, n log n, n^2, etc. have the property that if you double n, the runtime only goes up by some constant factor. They therefore scale much more nicely as a function of input size.
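
To see those two scaling behaviors side by side, one last little experiment (mine, not the book's):

for n in [10, 20, 40, 80]:
    print(f"n = {n:>2}   n^2/2 = {n * n // 2:>4}   2^n = {2 ** n}")
# Each time n doubles, n^2/2 goes up by a factor of 4,
# while 2^n gets squared.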


        answered Dec 2 '18 at 5:56

templatetypedef
265k