Ackermann function in Rust

Using the algorithm at the end of this research paper, I've written an extremely fast¹ Ackermann function in Rust. Can the implementation be made faster?
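
For reference (an editorial sketch, not part of the original post), this is the textbook two-argument definition that the paper's incremental algorithm computes far more efficiently:

fn naive_ackermann(m: u64, n: u64) -> u64 {
    // Plain Ackermann-Péter recursion; hopelessly slow and deeply recursive
    // for anything beyond small m and n, which is what the paper's algorithm avoids.
    if m == 0 {
        n + 1
    } else if n == 0 {
        naive_ackermann(m - 1, 1)
    } else {
        naive_ackermann(m - 1, naive_ackermann(m, n - 1))
    }
}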



Thoughts:



  • Using a struct makes a lot of sense, but it's weird to have to write Some(Box::new(v2)). Can this be avoided in some way? (A small constructor sketch follows after the code below.)

  • Are for loops the best way of doing the v transformations? Would (1..(i + 1)).fold(Record::new(1, 0, None), |v, k| a_use_p(k - 1, 1, v)) be more "Rustic"? It seems slightly faster, but that's hard to tell. (A fold-based sketch follows right after this list.)

  • Is there a way to parallelize this? Or any part of it?

Thanks.
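
As an editorial sketch of the fold form asked about in the second bullet (using the Record and ack_with_incrementalization names from the code below rather than the older a_use_p; build_cache is a hypothetical helper name), the first build-up loop in fast_ackermann could be written as:

fn build_cache(m: u64) -> Record {
    // Thread the seed cache through m incremental steps via fold instead of
    // reassigning a mutable variable in a for loop; the result is the same.
    (0..m).fold(Record::new(1, 0, None), |cache, m_builder| {
        ack_with_incrementalization(m_builder, 1, cache)
    })
}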



extern crate time;

use time::PreciseTime;

fn main() {
    let m = 3;

    for n in 1..20 {
        let s = PreciseTime::now();
        let res = fast_ackermann(m, n);
        let e = PreciseTime::now();
        println!("a_opt(3, {}) -> {} took {}", n, res, s.to(e))
    }
}

#[derive(Clone)]
struct Record {
    result: u64,
    previous_result: u64,
    cache: Option<Box<Record>>,
}

impl Record {
    fn new(result: u64, previous_result: u64, cache: Option<Box<Record>>) -> Record {
        Record { result, previous_result, cache }
    }
}

fn fast_ackermann(m: u64, n: u64) -> u64 {
    // Build the cache up row by row to (m, 1), then walk n forward along row m.
    let mut cache = Record::new(1, 0, None);
    for m_builder in 0..m {
        cache = ack_with_incrementalization(m_builder, 1, cache)
    }
    for n_builder in 1..(n + 1) {
        cache = ack_with_incrementalization(m, n_builder, cache)
    }
    cache.result
}

fn ack_with_incrementalization(m: u64, n: u64, current_cache: Record) -> Record {
    if m == 0 {
        Record::new(n + 1, 0, None)
    } else if n == 0 {
        // No usable cache for n = 0: rebuild from scratch.
        let mut new_cache = Record::new(1, 0, None);
        for m_builder in 0..m {
            new_cache = ack_with_incrementalization(m_builder, 1, new_cache);
        }
        new_cache
    } else if n == 1 {
        let cache_result = current_cache.result;
        let mut new_cache = current_cache;
        for n_builder in 2..(cache_result + 1) {
            new_cache = ack_with_incrementalization(m - 1, n_builder, new_cache);
        }
        Record::new(new_cache.result, cache_result, Some(Box::new(new_cache)))
    } else {
        // Reuse the nested cache and only advance it over the new range of n.
        let cache_result = current_cache.result;
        let mut new_cache = *current_cache.cache.unwrap();
        for n_builder in (current_cache.previous_result + 1)..(current_cache.result + 1) {
            new_cache = ack_with_incrementalization(m - 1, n_builder, new_cache);
        }
        Record::new(new_cache.result, cache_result, Some(Box::new(new_cache)))
    }
}
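
Regarding the first bullet above, one way to hide the Some(Box::new(..)) at the call sites (an editorial sketch, not part of the original post; with_cache is a hypothetical name) would be a second constructor:

impl Record {
    // Hypothetical convenience constructor: takes the inner record itself,
    // so callers never spell out Some(Box::new(..)).
    fn with_cache(result: u64, previous_result: u64, inner: Record) -> Record {
        Record { result, previous_result, cache: Some(Box::new(inner)) }
    }
}

The two Record::new(new_cache.result, cache_result, Some(Box::new(new_cache))) calls would then read Record::with_cache(new_cache.result, cache_result, new_cache).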





¹ With m = 3 and n = 20, this code takes 0.95 seconds; with n = 30 it takes 16 minutes. In the paper, n = 20 takes 5.05 seconds and n = 30 takes 87 minutes. A Rust version with an intermediate cache took 2.1 seconds for n = 20 and 39 minutes for n = 25 (not 30 like the others).







asked Mar 1 at 20:01 by Noah Bogart, edited Mar 2 at 18:46 by Shepmaster

  • Can you quantify "extremely fast"? Very impressive beard btw. – yuri, Mar 1 at 20:24

  • Thanks! With m = 3, n = 20 takes 0.95 seconds and n = 30 takes 16 minutes, whereas in the paper, n = 20 takes 5.05 seconds and n = 30 takes 87 minutes. Compare to a version with an intermediate cache (in Rust): n = 20 took 2.1 seconds and n = 25 (not 30 like the others) took 39 minutes. – Noah Bogart, Mar 1 at 21:42

  • You missed a few a_opt — make sure your code compiles ;-) – Shepmaster, Mar 2 at 17:23

  • My apologies, Zeta. A (now deleted) answer said it was okay as they were the only answer so far. I'll remember to not do so in the future. – Noah Bogart, Mar 2 at 18:00

  • @NoahBogart I should have diverged from the comment template a little bit more, sorry. Since Shepmaster was fine with the edit and deleted the review, everything's alright, it was just a remark for future questions, no worries. – Zeta, Mar 2 at 19:03

1 Answer (accepted)

I realized that the nested Records never go more than 3 deep (outermost, middle, innermost), which means it could be rewritten to use primitive arrays.



Comparison 1:



n: 20
fas_ack -> 8388605 took PT0.399960323S
arr_ack -> 8388605 took PT0.115302129S
n: 25
fas_ack -> 268435453 took PT11.283535003S
arr_ack -> 268435453 took PT3.553065491S
n: 30
fas_ack -> 8589934589 took PT757.605898139S
arr_ack -> 8589934589 took PT233.040801010S


Comparison 2:



n: 20
fas_ack -> 8388605 took PT0.807311828S
arr_ack -> 8388605 took PT0.442201425S
n: 25
fas_ack -> 268435453 took PT35.765848937S
arr_ack -> 268435453 took PT14.004015773S
n: 30
fas_ack -> 8589934589 took PT959.685955407S
arr_ack -> 8589934589 took PT141.390498532S
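
As an editorial sanity check (not part of the original answer): for m = 3 the Ackermann function has the closed form A(3, n) = 2^(n + 3) − 3, which matches the results printed above.

fn a3_closed_form(n: u64) -> u64 {
    // Valid only for m = 3, and only while the shift fits in a u64 (n <= 60).
    (1u64 << (n + 3)) - 3
}

fn main() {
    assert_eq!(a3_closed_form(20), 8_388_605);
    assert_eq!(a3_closed_form(25), 268_435_453);
    assert_eq!(a3_closed_form(30), 8_589_934_589);
}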


Updated code:



extern crate time;

use time::PreciseTime;

fn main() {
    let m = 3;

    for n in [20, 25, 30].iter() {
        println!("n: {}", n);
        let s = PreciseTime::now();
        let res = arr_ack(m, *n);
        let e = PreciseTime::now();
        println!("arr_ack -> {} took {}", res, s.to(e))
    }
}

fn arr_ack(m: u64, n: u64) -> u64 {
    // Flattened cache layout, two u64s per nesting level:
    // [result, previous_result, inner result, inner previous_result,
    //  innermost result, innermost previous_result]
    let mut cache: [u64; 6] = [0; 6];
    cache[0] = 1;
    for m_builder in 0..m {
        cache = _arr_ack(m_builder, 1, cache)
    }
    for n_builder in 1..(n + 1) {
        cache = _arr_ack(m, n_builder, cache)
    }
    cache[0]
}

fn _arr_ack(m: u64, n: u64, mut cache: [u64; 6]) -> [u64; 6] {
    if m == 0 {
        [n + 1, 0, 0, 0, 0, 0]
    } else if m == 1 {
        [n + 2, 0, 0, 0, 0, 0]
    } else if n == 0 {
        let mut new_cache = [1, 0, 0, 0, 0, 0];
        for m_builder in 0..m {
            new_cache = _arr_ack(m_builder, 1, new_cache);
        }
        new_cache
    } else if n == 1 {
        let previous_result = cache[0];
        for n_builder in 2..(previous_result + 1) {
            cache = _arr_ack(m - 1, n_builder, cache);
        }
        [cache[0], previous_result, cache[0], cache[1], cache[2], cache[3]]
    } else {
        // Shift the inner levels up, advance them over the new range of n,
        // then shift the updated values back down.
        let previous_result = cache[0];
        let mut new_cache = [cache[2], cache[3], cache[4], cache[5], 0, 0];
        for n_builder in (cache[1] + 1)..(previous_result + 1) {
            new_cache = _arr_ack(m - 1, n_builder, new_cache);
        }
        [new_cache[0], previous_result, new_cache[0], new_cache[1], new_cache[2], new_cache[3]]
    }
}
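
A small editorial aside: the external time crate is only used here for wall-clock timing. A sketch of an equivalent main using std::time::Instant from the standard library (assuming the arr_ack above) would be:

use std::time::Instant;

fn main() {
    let m = 3;
    for n in [20u64, 25, 30].iter() {
        let start = Instant::now();
        let res = arr_ack(m, *n);
        // Duration implements Debug, so {:?} prints a human-readable time.
        println!("arr_ack(3, {}) -> {} took {:?}", n, res, start.elapsed());
    }
}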







answered Mar 2 at 20:17 by Noah Bogart















             
