
Using Redis as a distributed cache

Note

This feature requires a Metalama Professional license.

If you have a distributed application where several instances run in parallel, Redis is an excellent choice for implementing caching, for the following reasons:

  1. In-Memory Storage: Redis stores its dataset in memory, allowing for very fast read and write operations, which are significantly faster than disk-based databases.
  2. Rich Data Structures and Atomic Operations: Redis is not just a simple key-value store; it supports multiple data structures like strings, hashes, lists, sets, sorted sets, and more. Combined with Redis's support for atomic operations on these complex data types, Metalama Caching can implement support for cache dependencies (see Working with cache dependencies).
  3. Scalability and Replication: Redis provides features for horizontal partitioning or sharding. As your dataset grows, you can distribute it across multiple Redis instances. Redis supports multi-instance replication, allowing for data redundancy and higher data availability. If the master fails, a replica can be promoted to master, ensuring that the system remains available.
  4. Pub/Sub: Thanks to the Redis Pub/Sub feature, Metalama can synchronize the distributed Redis cache with a local in-memory L1 cache. Metalama can also use this feature to synchronize several local in-memory caches without using Redis storage.

Our implementation uses the StackExchange.Redis library internally and is compatible with on-premises instances of Redis Cache as well as with the Azure Redis Cache cloud service.

When used with Redis, Metalama Caching supports the following features:

  • Distributed caching,
  • Non-blocking cache write operations,
  • In-memory L1 cache in front of the distributed L2 cache, and
  • Synchronization of several in-memory caches using Redis Pub/Sub.

This article covers all these topics.

Configuring the Redis server

The first step is to prepare your Redis server for use with Metalama caching. Follow these steps:

  1. Set the eviction policy to volatile-lru or volatile-random. See https://redis.io/topics/lru-cache#eviction-policies for details.

    Caution

    Eviction policies other than volatile-lru or volatile-random are not supported.

  2. Enable key-space notifications and include the AKE event classes. See https://redis.io/topics/notifications#configuration for details.
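As a sketch, both settings can be applied at runtime with redis-cli; managed services such as Azure Cache for Redis expose the same settings through their own configuration interface instead:

```shell
# Evict only keys that have an expiration set, as required by Metalama Caching.
redis-cli CONFIG SET maxmemory-policy volatile-lru

# Enable key-space notifications for the A, K, and E event classes.
redis-cli CONFIG SET notify-keyspace-events AKE

# Optionally persist these settings to redis.conf (not supported on all deployments).
redis-cli CONFIG REWRITE
```

This is a configuration fragment; runtime changes made with CONFIG SET are lost on server restart unless persisted.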

Configuring the caching backend in Metalama

The second step is to configure Metalama Caching to use Redis.

With dependency injection

Follow these steps:

  1. Add a reference to the Metalama.Patterns.Caching.Backends.Redis package.

  2. Create a StackExchange.Redis.ConnectionMultiplexer and add it to the service collection as a singleton of the IConnectionMultiplexer interface type.

    // Add Redis.
    builder.Services.AddSingleton<IConnectionMultiplexer>( _ =>
    {
        var redisConnectionOptions = new ConfigurationOptions();
        redisConnectionOptions.EndPoints.Add( endpoint.Address, endpoint.Port );

        return ConnectionMultiplexer.Connect( redisConnectionOptions );
    } );

    Note

    If you are using .NET Aspire, simply call UseRedis().

  3. Go back to the code that initialized Metalama Caching by calling serviceCollection.AddMetalamaCaching. Call the WithBackend method, and supply a delegate that calls the Redis method.

    Here is an example of the AddMetalamaCaching code.

    // Add the caching service.
    builder.Services.AddMetalamaCaching(
        caching => caching.WithBackend( backend => backend.Redis() ) );

  4. We recommend initializing the caching service during the initialization sequence of your application, otherwise the service will be initialized lazily upon first use. Get the ICachingService interface from the IServiceProvider and call the InitializeAsync method.

    // Initialize caching.
    await app.Services.GetRequiredService<ICachingService>().InitializeAsync();


Example: caching using Redis

Here's an update of the example used in Getting started with Metalama Caching, modified to use Redis instead of MemoryCache as the caching back-end.

Source Code

using Metalama.Patterns.Caching.Aspects;
using System;

namespace Doc.Redis;

public sealed class CloudCalculator
{
    public int OperationCount { get; private set; }

    [Cache]
    public int Add( int a, int b )
    {
        Console.WriteLine( "Doing some very hard work." );

        this.OperationCount++;

        return a + b;
    }
}
Transformed Code

using Metalama.Patterns.Caching;
using Metalama.Patterns.Caching.Aspects;
using System;
using System.Reflection;

namespace Doc.Redis;

public sealed class CloudCalculator
{
    public int OperationCount { get; private set; }

    [Cache]
    public int Add(int a, int b)
    {
        static object? Invoke(object? instance, object?[] args)
        {
            return ((CloudCalculator)instance).Add_Source((int)args[0], (int)args[1]);
        }

        return _cachingService.GetFromCacheOrExecute<int>(_cacheRegistration_Add, this, new object[] { a, b }, Invoke);
    }

    private int Add_Source(int a, int b)
    {
        Console.WriteLine("Doing some very hard work.");

        this.OperationCount++;

        return a + b;
    }

    private static readonly CachedMethodMetadata _cacheRegistration_Add;
    private ICachingService _cachingService;

    static CloudCalculator()
    {
        _cacheRegistration_Add = CachedMethodMetadata.Register(typeof(CloudCalculator).GetMethod("Add", BindingFlags.Public | BindingFlags.Instance, null, new[] { typeof(int), typeof(int) }, null) ?? throw new MissingMethodException("The method 'CloudCalculator.Add(int, int)' could not be found using reflection."), new CachedMethodConfiguration() { AbsoluteExpiration = null, AutoReload = null, IgnoreThisParameter = null, Priority = null, ProfileName = (string?)null, SlidingExpiration = null }, false);
    }

    public CloudCalculator(ICachingService? cachingService = null)
    {
        this._cachingService = cachingService ?? throw new System.ArgumentNullException(nameof(cachingService));
    }
}
using Metalama.Documentation.Helpers.ConsoleApp;
using Metalama.Documentation.Helpers.Redis;
using Metalama.Patterns.Caching;
using Metalama.Patterns.Caching.Backends.Redis;
using Metalama.Patterns.Caching.Building;
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;
using System.Threading.Tasks;

namespace Doc.Redis;

internal static class Program
{
    public static async Task Main()
    {
        var builder = ConsoleApp.CreateBuilder();

        // Add a local Redis server with a randomly assigned port. You don't need this in your code.
        using var redis = builder.Services.AddLocalRedisServer();
        var endpoint = redis.Endpoint;

        // Add Redis.
        builder.Services.AddSingleton<IConnectionMultiplexer>( _ =>
        {
            var redisConnectionOptions = new ConfigurationOptions();
            redisConnectionOptions.EndPoints.Add( endpoint.Address, endpoint.Port );

            return ConnectionMultiplexer.Connect( redisConnectionOptions );
        } );

        // Add the caching service.
        builder.Services.AddMetalamaCaching(
            caching => caching.WithBackend( backend => backend.Redis() ) );

        // Add other components as usual.
        builder.Services.AddAsyncConsoleMain<ConsoleMain>();
        builder.Services.AddSingleton<CloudCalculator>();

        // Build the host.
        await using var app = builder.Build();

        // Initialize caching.
        await app.Services.GetRequiredService<ICachingService>().InitializeAsync();

        // Run the host.
        await app.RunAsync();
    }
}
using Metalama.Documentation.Helpers.ConsoleApp;
using System;
using System.Threading.Tasks;

namespace Doc.Redis;

public sealed class ConsoleMain : IAsyncConsoleMain
{
    private readonly CloudCalculator _cloudCalculator;

    public ConsoleMain( CloudCalculator cloudCalculator )
    {
        this._cloudCalculator = cloudCalculator;
    }

    public Task ExecuteAsync()
    {
        for ( var i = 0; i < 3; i++ )
        {
            var value = this._cloudCalculator.Add( 1, 1 );
            Console.WriteLine( $"CloudCalculator returned {value}." );
        }

        Console.WriteLine(
            $"In total, CloudCalculator performed {this._cloudCalculator.OperationCount} operation(s)." );

        return Task.CompletedTask;
    }
}
Doing some very hard work.
CloudCalculator returned 2.
CloudCalculator returned 2.
CloudCalculator returned 2.
In total, CloudCalculator performed 1 operation(s).

Without dependency injection

If you aren't using dependency injection:

  1. Create a StackExchange.Redis.ConnectionMultiplexer.

  2. Call CachingService.Create, then the WithBackend method, and supply a delegate that calls the Redis method. Pass a RedisCachingBackendConfiguration and set the Connection property to your ConnectionMultiplexer.
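Putting these two steps together, the initialization might look like the following sketch. The connection string "localhost:6379" is a placeholder, and the exact shape of the CachingService.Create overloads may differ in your version of the library:

```csharp
using Metalama.Patterns.Caching;
using Metalama.Patterns.Caching.Backends.Redis;
using StackExchange.Redis;

// 1. Create the Redis connection.
var connection = ConnectionMultiplexer.Connect( "localhost:6379" );

// 2. Create the caching service with a Redis backend, passing the connection
//    through RedisCachingBackendConfiguration.Connection.
var cachingService = CachingService.Create(
    caching => caching.WithBackend(
        backend => backend.Redis(
            new RedisCachingBackendConfiguration { Connection = connection } ) ) );
```

This is a configuration sketch, not a complete program; dispose the ConnectionMultiplexer and the caching service when your application shuts down.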

Resilience and performance

The Redis caching backend includes a built-in resilience framework that handles transient failures through retry policies and exception handling policies. This replaces the previous ExceptionHandlingCachingBackendEnhancer approach used in earlier versions.

Retry policies

Retry policies control how failed Redis operations are retried. The IRetryPolicy interface defines the contract, and the default implementation RetryPolicy uses exponential backoff with jitter.

The RedisCachingBackendConfiguration exposes three retry policy properties:

Property                       Default                 Description
TransactionRetryPolicy         TransactionRetryPolicy  Handles retries for Redis transactions that fail due to data conflicts.
BackgroundTasksRetryPolicy     BackgroundRetryPolicy   Handles retries for non-blocking background operations.
BackgroundRecoveryRetryPolicy  BackgroundRetryPolicy   Handles retries for recovery actions such as InvalidateDependencyInBackground or RemoveItemInBackground.

The RetryPolicy class exposes the following configurable properties:

Property         Type      Default
BaseDelay        TimeSpan  25 ms
Multiplier       double    1.2
MaxDelay         TimeSpan  2 s
JitterFactor     double    0.2
MaxAttempts      int       5
NoDelayAttempts  int       1
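To illustrate how these properties interact, the following sketch computes an exponential-backoff delay schedule. This is an assumed, illustrative formula based on the property names above, not the library's actual implementation: the first NoDelayAttempts retries happen immediately, later delays grow geometrically from BaseDelay by Multiplier, are capped at MaxDelay, and are randomized by ±JitterFactor.

```csharp
using System;

internal static class RetryDelayIllustration
{
    // Illustrative only: assumed semantics of the RetryPolicy properties,
    // not the library's exact code.
    public static TimeSpan GetDelay(
        int attempt, TimeSpan baseDelay, double multiplier,
        TimeSpan maxDelay, double jitterFactor, int noDelayAttempts )
    {
        // The first NoDelayAttempts retries are immediate.
        if ( attempt <= noDelayAttempts )
        {
            return TimeSpan.Zero;
        }

        // Exponential growth from BaseDelay, capped at MaxDelay.
        var delayMs = baseDelay.TotalMilliseconds
                      * Math.Pow( multiplier, attempt - noDelayAttempts - 1 );
        delayMs = Math.Min( delayMs, maxDelay.TotalMilliseconds );

        // Randomize by up to ±JitterFactor to avoid thundering herds.
        var jitter = 1 + (((Random.Shared.NextDouble() * 2) - 1) * jitterFactor);

        return TimeSpan.FromMilliseconds( delayMs * jitter );
    }
}
```

With the defaults, this schedule retries once immediately and then waits roughly 25 ms, 30 ms, and 36 ms (plus or minus 20% jitter) before giving up after the fifth attempt.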

Exception handling policies

The IExceptionHandlingPolicy interface allows you to control how exceptions are handled after all retry attempts have been exhausted. The DefaultExceptionHandlingPolicy logs exceptions and attempts to recover from failed write operations.

Set the ExceptionHandlingPolicy property to customize this behavior.

The exception handling policy receives an ExceptionInfo object describing the exception and returns a RecoveryAction indicating how to proceed:

Recovery action                   Description
Swallow                           The exception is silently consumed.
Rethrow                           The exception is rethrown to the caller.
RemoveItemInBackground            The cache item that caused the exception is removed asynchronously.
InvalidateDependencyInBackground  The dependency that caused the exception is invalidated asynchronously.
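Conceptually, a custom policy maps an ExceptionInfo to a RecoveryAction. The sketch below shows what such a policy could look like; the member names HandleException, ExceptionInfo.OperationKind, and the OperationKind values GetItem and SetItem are assumptions based on this article, not a verified API surface:

```csharp
using Metalama.Patterns.Caching.Backends.Redis;

// Hypothetical custom policy. The members used below (HandleException,
// ExceptionInfo.OperationKind, OperationKind.GetItem/SetItem) are assumed
// names for illustration; check the actual interface in your version.
internal sealed class StrictReadPolicy : IExceptionHandlingPolicy
{
    public RecoveryAction HandleException( ExceptionInfo exceptionInfo )
        => exceptionInfo.OperationKind switch
        {
            // Fail loudly when a read could not be served.
            OperationKind.GetItem => RecoveryAction.Rethrow,

            // For failed writes, remove the possibly stale item in the background.
            OperationKind.SetItem => RecoveryAction.RemoveItemInBackground,

            // Everything else: log-and-continue semantics.
            _ => RecoveryAction.Swallow
        };
}
```

Assign an instance of such a class to the ExceptionHandlingPolicy configuration property to activate it.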

The OperationKind enum identifies which operation triggered the exception, allowing the policy to make context-specific decisions.

Key compression

When cache keys exceed a certain length, they can cause performance issues. Redis recommends a maximum length of 1024 bytes. The Redis caching backend can automatically hash long keys using the KeyCompressingThreshold property.

When a cache key exceeds the threshold (default: 128 characters), it is hashed using the algorithm specified by the CacheKeyHashingAlgorithm enum:

Algorithm  Description                           Max safe elements (p < 10⁻⁹)
None       No hashing (default).                 N/A
XxHash64   64-bit xxHash, for small data spaces  ~200,000 (1.9 × 10⁵)
XxHash128  128-bit xxHash, for any data space    ~250 trillion (2.6 × 10¹⁴)

Concurrency and overload detection

The Redis caching backend manages several types of concurrent operations and provides mechanisms to prevent system overload.

Background task management

Many Redis operations (write-through for L1 caches, invalidation propagation, recovery actions) are executed in the background. The following configuration properties control concurrency:

Property                            Default  Description
BackgroundTasksMaxConcurrency       25       Maximum number of concurrent background tasks.
BackgroundTasksOverloadedThreshold  125      Number of queued tasks above which the backend reports an overloaded state.
InvalidationMaxConcurrency          20       Maximum number of concurrent invalidation operations per call.

Overload detection

The RedisCachingBackend class exposes an overload detection mechanism through the IsBackgroundTaskQueueOverloaded property and the IsBackgroundTaskQueueOverloadedChanged event. When the number of queued background tasks exceeds the BackgroundTasksOverloadedThreshold, the backend notifies dependent components. In particular, the RedisCacheDependencyGarbageCollector temporarily stops processing real-time eviction and expiration notifications during overload to avoid putting further strain on the system.

Example: configuring resilience and performance

The following example shows how to customize retry policies, concurrency limits, and key compression when initializing the Redis caching backend:

builder.Services.AddMetalamaCaching(
    caching => caching.WithBackend(
        backend => backend.Redis(
            new RedisCachingBackendConfiguration
            {
                // Set a custom transaction retry policy with more attempts.
                TransactionRetryPolicy = new RetryPolicy
                {
                    MaxAttempts = 10,
                    BaseDelay = TimeSpan.FromMilliseconds( 50 ),
                    MaxDelay = TimeSpan.FromSeconds( 5 )
                },

                // Set the concurrency limits for background operations.
                BackgroundTasksMaxConcurrency = 50,
                BackgroundTasksOverloadedThreshold = 200,

                // Enable key compression for keys longer than 256 characters.
                KeyCompressingThreshold = 256
            } ) ) );


Adding a local in-memory cache in front of your Redis cache

For higher performance, you can add an additional, in-process layer of caching (called L1) between your application and the remote Redis server (called L2).

The benefit of using an in-memory L1 cache is to decrease latency between the application and the Redis server, and to decrease CPU load due to the deserialization of objects. To further decrease latency, write operations to the L2 cache are performed in the background.

To enable the local cache, inside serviceCollection.AddMetalamaCaching, call the WithL1 method right after the Redis method.

The following snippet shows the updated AddMetalamaCaching code; the only change is the added call to the WithL1 method.

// Add the caching service.
builder.Services.AddMetalamaCaching(
    caching => caching.WithBackend( backend => backend.Redis().WithL1() ) );


When you run several nodes of your applications with the same Redis server and the same KeyPrefix, the L1 caches of each application node are synchronized using Redis notifications.

Warning

Due to the asynchronous nature of notification-based invalidation, there may be a few milliseconds during which different application nodes see different values of a cache item. However, the application instance that initiates a change always has a consistent view of the cache. These short lapses of inconsistency are therefore generally harmless when application clients are affinitized to a single application node. If clients are not affinitized, however, they may experience cache consistency issues, and the developers who maintain the application may lose a few hairs in the troubleshooting process.

Using dependencies with the Redis caching backend

Metalama Caching's Redis back-end supports dependencies (see Working with cache dependencies), but this feature is disabled by default with the Redis caching backend due to its significant performance and deployment impact:

  • From a performance perspective, the cache dependencies need to be stored in Redis (therefore consuming memory) and handled in a transactional way (therefore consuming processing power).
  • From a deployment perspective, the server requires a garbage collection service to run continuously, even when the app isn't running. This service cleans up dependencies when cache items are expired from the cache.

If you choose to enable dependencies with Redis, ensure that at least one instance of the cache GC process is running. It's legal to run several instances of this process, but since all instances compete to process the same messages, it's better to run only a small number of instances (ideally one).

To enable dependencies, set the RedisCachingBackendConfiguration.SupportsDependencies property to true when initializing the Redis caching back-end.
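For example, building on the configuration shown earlier in this article (the KeyPrefix value is a placeholder):

```csharp
builder.Services.AddMetalamaCaching(
    caching => caching.WithBackend(
        backend => backend.Redis(
            new RedisCachingBackendConfiguration
            {
                // Enable cache dependencies. Remember to run the dependency
                // GC process described below.
                SupportsDependencies = true,

                // The GC process must use exactly the same KeyPrefix.
                KeyPrefix = "TheApp.1.0.0"
            } ) ) );
```

This is a configuration sketch; without a running GC process, dependency data will accumulate in Redis.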

Warning

Caching dependencies can't be used on a Redis cluster. Only the master-replica topology is supported with caching dependencies. This limitation exists because a cache operation with dependencies is implemented as a transaction of several operations, which must all reside on the same node.

Running the dependency GC process

The recommended approach to run the dependency GC process is to create an application host using the Microsoft.Extensions.Hosting namespace. The GC process implements the IHostedService interface. To add it to the application, use the AddRedisCacheDependencyGarbageCollector extension method.

In case of an outage of the service running the GC process, execute the PerformFullCollectionAsync method.

The following program demonstrates this:

using Metalama.Documentation.Helpers.Redis;
using Metalama.Patterns.Caching.Backends.Redis;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using StackExchange.Redis;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace Doc.RedisGC;

public sealed class Program
{
    public static async Task Main( string[] args )
    {
        var appBuilder = Host.CreateApplicationBuilder();

        // Add a local Redis server with a randomly assigned port. You don't need this in your code.
        using var redis = appBuilder.Services.AddLocalRedisServer();
        var endpoint = redis.Endpoint;

        // Add the garbage collector service, implemented as an IHostedService.
        appBuilder.Services.AddRedisCacheDependencyGarbageCollector( _ =>
        {
            // Build the Redis connection options.
            var redisConnectionOptions = new ConfigurationOptions();
            redisConnectionOptions.EndPoints.Add( endpoint.Address, endpoint.Port );

            // The KeyPrefix must match _exactly_ the one used by the caching back-end.
            var keyPrefix = "TheApp.1.0.0";

            return new RedisCachingBackendConfiguration
            {
                NewConnectionOptions = redisConnectionOptions, KeyPrefix = keyPrefix
            };
        } );

        var host = appBuilder.Build();

        await host.StartAsync();

        if ( args.Contains( "--full" ) )
        {
            Console.WriteLine( "Performing full collection." );

            var collector =
                host.Services.GetRequiredService<RedisCacheDependencyGarbageCollector>();

            await collector.PerformFullCollectionAsync();
            Console.WriteLine( "Full collection completed." );
        }

        const bool isProductionCode = false;

        if ( isProductionCode )
        {
            // await host.WaitForShutdownAsync();
        }
        else
        {
            // This code runs in an automated test, so we shut down the service after 1 second.
            await Task.Delay( 1000 );
            await host.StopAsync();
        }
    }
}

Configuring the dependency GC

The garbage collector can be configured using the RedisCacheDependencyGarbageCollectorOptions class, which exposes the following properties:

Property             Type                 Default                                   Description
CacheCleanupDelay    TimeSpan             4 hours                                   Delay between subsequent periodic cleanups.
CacheCleanupOptions  CacheCleanupOptions  WaitDelay = 100 ms, MaxConcurrency = 1    Options for the periodic cleanup operation.

The CacheCleanupOptions class controls the cleanup behavior:

Property          Type      Default  Description
WaitDelay         TimeSpan  0        Delay between processing two keys.
RemediationDelay  TimeSpan  10 s     Delay before re-checking an inconsistency for remediation. This accounts for replication lag in distributed setups.
MaxConcurrency    int       20       Maximum number of keys analyzed concurrently.
Dry               bool      false    When true, reports errors without attempting to fix them.
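As a sketch, a conservative configuration might first run the collector in dry mode with throttled concurrency. The property names come from the tables above, but the exact way the options object is passed to the registration method is an assumption; check the AddRedisCacheDependencyGarbageCollector overloads in your version:

```csharp
using Metalama.Patterns.Caching.Backends.Redis;
using System;

// Conservative GC configuration sketch. How these options are wired into
// AddRedisCacheDependencyGarbageCollector is an assumption; the property
// names are taken from the tables above.
var gcOptions = new RedisCacheDependencyGarbageCollectorOptions
{
    // Run the periodic cleanup every hour instead of every 4 hours.
    CacheCleanupDelay = TimeSpan.FromHours( 1 ),

    CacheCleanupOptions = new CacheCleanupOptions
    {
        // Throttle: pause between keys and analyze fewer keys concurrently.
        WaitDelay = TimeSpan.FromMilliseconds( 200 ),
        MaxConcurrency = 4,

        // First, only report inconsistencies without fixing them.
        Dry = true
    }
};
```

Once the dry run reports no unexpected inconsistencies, set Dry back to false to let the collector repair them.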