Queue-based fixed-duration memory cache

I used to use the MemoryCache in .NET, but recently read about a database that uses a linked list for expiry: because all entries share a fixed duration, it only has to check the prefix (head) of the list, which is $O(1)$, as opposed to the $O(\log n)$ of the min-heap approach that is the usual default when timeouts vary.
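To make that contrast concrete, here is a minimal Python sketch (my own illustration, not the F# code under review) of why a plain FIFO suffices when every entry has the same TTL: entries are enqueued in insertion order, so expiry order equals insertion order and only the head ever needs checking.

```python
from collections import deque

class FixedTtlCache:
    """Fixed-duration cache: FIFO expiry works because all TTLs are equal."""
    def __init__(self, keep_for: float):
        self.keep_for = keep_for
        self.timeouts = deque()   # (key, deadline), in insertion order
        self.cache = {}           # key -> inserted-at timestamp

    def _expire(self, now: float):
        # Only the head can be due first: insertion order == expiry order.
        while self.timeouts and self.timeouts[0][1] <= now:
            key, _ = self.timeouts.popleft()
            self.cache.pop(key, None)

    def try_add(self, key: str, now: float) -> bool:
        self._expire(now)
        if key in self.cache:
            return False
        self.cache[key] = now
        self.timeouts.append((key, now + self.keep_for))
        return True
```

With variable TTLs this breaks, since an entry inserted later could be due earlier; that is exactly the case where the min-heap and its $O(\log n)$ operations become necessary.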



I'd appreciate a review of this code: how optimal it is, and whether there are obvious mistakes I'm missing. The general assumption is that reads vastly outnumber writes (count(R) >>> count(W)).



I considered using ReaderWriterLockSlim in upgradeable mode, but in this piece of code two threads holding read locks may very well try to acquire the write upgrade simultaneously, so I decided against it. The check of the queue head should be quick enough to avoid contention on this resource.



This code is intended to be used in a low-latency F# API as part of Logibit Hawk.



I'd also like this code to be non-allocating, apart from the two int64-backed Instant values and the string reference allocated outside. Returning Choice values (which allocate) is intentional, however, and will be changed to Result in the future.



module Settings =
  open NodaTime
  open System.Collections.Generic
  open System.Collections.Concurrent

  let private tsem = obj ()

  let private runTimeouts (timeouts: Queue<_>) (cache: ConcurrentDictionary<string, Instant>) (now: Instant) =
    /// Iterate until the head is due later than now. Tail recursion needed.
    let rec iter () =
      if timeouts.Count = 0 then () else
      let struct (nonce, timeout) = timeouts.Peek()
      if timeout > now then () else
      ignore (timeouts.Dequeue())
      ignore (cache.TryRemove nonce)
      iter ()
    lock tsem iter

  let nonceValidatorMem (clock: IClock) (keepFor: Duration) =
    // We use a queue because dequeue/peek are O(1), and since the duration is constant for the
    // lifetime of this validator, we know the head is due closest in time.
    let timeouts = Queue<_>()
    // The cache stores the nonces and when they were added.
    let cache = ConcurrentDictionary<_, _>()

    fun (nonce: string, ts: Instant) ->
      let now = clock.GetCurrentInstant()
      do runTimeouts timeouts cache now
      // https://docs.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentdictionary-2.tryadd?view=netframework-4.7.1
      // TryAdd returns true if the nonce was successfully added.
      if cache.TryAdd (nonce, now) then
        do lock tsem (fun () -> timeouts.Enqueue (struct (nonce, now + keepFor)))
        Choice1Of2 ()
      else
        Choice2Of2 AlreadySeen
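For readers less familiar with F#, the validator's behaviour can be mirrored in Python. This is a rough sketch under stated assumptions, not a translation: the `clock` callable, `ALREADY_SEEN` sentinel, and the single-lock simplification are all my own stand-ins, and Python's dict membership check plus insert is not the atomic `TryAdd` of `ConcurrentDictionary`.

```python
import threading
from collections import deque

ALREADY_SEEN = "AlreadySeen"   # stand-in for the F# AlreadySeen case

def nonce_validator_mem(clock, keep_for: float):
    """Returns a validator: None on first sight of a nonce, ALREADY_SEEN on replay."""
    tsem = threading.Lock()    # guards the timeout queue, like tsem above
    timeouts = deque()         # (nonce, deadline) in insertion order
    cache = {}                 # nonce -> inserted-at

    def validate(nonce: str):
        now = clock()
        with tsem:             # expire due entries; only the head can be due
            while timeouts and timeouts[0][1] <= now:
                expired, _ = timeouts.popleft()
                cache.pop(expired, None)
        if nonce in cache:     # NOT atomic with the insert below, unlike TryAdd
            return ALREADY_SEEN
        cache[nonce] = now
        with tsem:
            timeouts.append((nonce, now + keep_for))
        return None

    return validate
```

A fake clock makes the replay window easy to exercise: a nonce is rejected while within `keep_for` of its first appearance and accepted again once the deadline has passed.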






  • I'd performance test it against a version based on a doubly linked list instead of the Queue.
    – Ant
    May 10 at 22:03
















edited Jun 19 at 10:05 by Billal BEGUERADJ

asked May 1 at 17:35 by Henrik










