Caching is an effective method to enhance the performance of an application. Traditionally, implementing caching requires interacting with the API of the caching framework (such as MemoryCache) or database (like Redis). It also involves incorporating moderately complex logic into your source code to generate the cache key, verify the item's existence in the cache, and add the item to the cache. Additional complexity arises from the necessity to remove items from the cache when the source data is updated. Manual caching implementation is not only time-consuming but also prone to errors, as it is easy to generate inconsistent cache keys between read and update methods.
Metalama Caching offers several advantages over manual caching:
Reduced boilerplate: Metalama Caching enables you to cache the return value of a method as a function of its arguments with just a custom attribute, specifically the [Cache] aspect. To invalidate the cache, add the [InvalidateCache] aspect to the update methods. To use a custom class as a parameter of a cached method, apply the [CacheKey] aspect to mark the properties that uniquely identify the object. Consequently, your business code becomes shorter and more readable.
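To make this concrete, here is a minimal sketch of how the three aspects combine. The entity and service names (`CustomerRef`, `CustomerService`) are invented for illustration; the attributes come from the `Metalama.Patterns.Caching` package.

```csharp
using Metalama.Patterns.Caching.Aspects;

public record Customer( int Id, string Name );

// Hypothetical parameter type: [CacheKey] marks the properties that
// contribute to the cache key; other properties are ignored.
public class CustomerRef
{
    [CacheKey]
    public int Id { get; init; }

    public string? DisplayHint { get; init; } // Not part of the cache key.
}

public class CustomerService
{
    // The return value is cached as a function of the method arguments.
    [Cache]
    public Customer GetCustomer( CustomerRef customer )
        => new Customer( customer.Id, "loaded from the database" );

    // Invalidates the cached value of GetCustomer for the matching argument.
    [InvalidateCache( nameof(GetCustomer) )]
    public void UpdateCustomer( CustomerRef customer, string newName )
    {
        // ... write to the database ...
    }
}
```

Note that only `Id` participates in the cache key, so two `CustomerRef` instances with the same `Id` but different `DisplayHint` values hit the same cache entry.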
Reduced bugs: Manually generating cache keys with hand-written code is notorious for being bug-prone. Metalama Caching eliminates this source of defects by implementing a reliable approach to key generation, combining object-oriented and aspect-oriented techniques.
Reduced coupling: Cache invalidation is notoriously complex and often forces you to review every write method each time you add caching to a read method. Cache dependencies act as an abstraction layer between read and write methods, reducing the coupling between them.
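As a sketch of this decoupling (the service name `ProductService` and the dependency-key format are invented here; the `ICachingService.AddDependency` and `Invalidate` calls are assumptions to be checked against the Metalama Caching API reference), a read method declares a dependency string and a write method invalidates it, without either method knowing about the other:

```csharp
using Metalama.Patterns.Caching;
using Metalama.Patterns.Caching.Aspects;

public class ProductService
{
    private readonly ICachingService _cachingService;

    public ProductService( ICachingService cachingService )
        => this._cachingService = cachingService;

    [Cache]
    public decimal GetPrice( int productId )
    {
        // Attach a dependency string to the cache item being created.
        this._cachingService.AddDependency( $"product:{productId}" );

        return 42m; // ... load from the database ...
    }

    public void UpdatePrice( int productId, decimal newPrice )
    {
        // ... write to the database ...

        // Invalidate every cache item that declared this dependency,
        // without needing to know which read methods are cached.
        this._cachingService.Invalidate( $"product:{productId}" );
    }
}
```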
Flexible topologies: Metalama Caching supports several caching topologies, allowing you to switch between them effortlessly:
- In-memory caching,
- Redis-based distributed caching (see Using Redis as a distributed server),
- Redis-based distributed caching with a synchronized in-memory L1 cache (see Using Redis as a distributed server), and
- In-memory caching with multi-node synchronization over Azure Service Bus or Redis Pub/Sub (see Synchronizing local in-memory caches for multiple servers).
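Because the topology is selected at service registration rather than in business code, switching between the options above does not require touching cached methods. A hedged sketch of the default in-memory registration (assuming the `AddMetalamaCaching` extension method from the `Metalama.Patterns.Caching.Building` namespace; the Redis and pub/sub variants are configured through a delegate passed to the same method, as described in the linked articles):

```csharp
using Metalama.Patterns.Caching.Building;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Default topology: local in-memory caching.
services.AddMetalamaCaching();

// The other topologies (Redis, Redis with an L1 cache, multi-node
// invalidation over Azure Service Bus or Redis Pub/Sub) are selected
// by passing a configuration delegate to AddMetalamaCaching.
```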
In this chapter
| Article | Description |
|---------|-------------|
| Getting started with Metalama Caching | This article demonstrates how to cache method return values. |
| Invalidating the cache | This article illustrates how to declaratively and imperatively invalidate cached method return values. |
| Working with cache dependencies | This article explains how to automatically invalidate cache items using cache dependencies. |
| Customizing cache keys | This article provides guidance on customizing the cache keys that identify cached method return values. |
| Using Redis as a distributed server | This article shows how to use Redis as a distributed cache. |
| Synchronizing local in-memory caches for multiple servers | This article demonstrates how to invalidate all related in-memory caches in a distributed environment. |
| Caching mutable or stream-like types with value adapters | This article describes how to cache return values of methods that cannot be directly cached, such as instances of <xref:System.Collections.Generic.IEnumerable`1> or <xref:System.IO.Stream>. |
| Preventing concurrent execution of cached methods | This article explains how to prevent the same method from being executed with the same arguments simultaneously by using locking. |
| Troubleshooting Metalama Caching | This article details how to add logging to the caching component. |