When SharedPreferences Fails: Architecting Resilient Cache Infrastructure for Production Flutter Apps

Mahmoud Alatrash

The Problem: It Starts So Simply

You're building a Flutter app. You need to cache some data. You reach for shared_preferences.
Five minutes later, you've got a working prototype. Ship it, right?

Wrong.

Six months later, you're debugging why your app crashes on iOS Simulator, why SharedPreferences.getInstance()
randomly fails on some Android devices, and why your "secure" tokens are stored in plain text XML files.

This is Part 1 of a 3-part series on architecting fault-tolerant, secure, and scalable cache systems in Flutter.

In this part, we'll cover:

  • ✅ Why SharedPreferences fails in production (and how to detect it)
  • ✅ Circuit Breaker Pattern for zero-downtime degradation
  • ✅ LRU Eviction to prevent memory leaks (from 380MB → 52MB)
  • ✅ Clean Architecture principles for testable cache infrastructure
  • ✅ Strategy Pattern for swappable storage backends
  • ✅ Type-safe serialization with automatic JSON handling
  • ✅ Configuration as code to avoid magic numbers

By the end of this guide, you'll have a cache system that survives platform failures, memory constraints,
and high-concurrency scenarios.

You can find the full source code in the Flutter Production Architecture repository.


🚀 Quick Start: The "Hands-on" Approach

Before we dive into the deep architecture, here's how easy it is to use the final system.

1. Setup (in main.dart)

Instead of a messy initialization, we use a clean configuration object:

void main() async {
  WidgetsFlutterBinding.ensureInitialized();

  // Detect device capabilities. getTotalMemory() is not a Flutter API;
  // implement it yourself (e.g. via a platform channel).
  final isLowEndDevice = (await getTotalMemory()) < 2 * 1024 * 1024 * 1024; // < 2GB

  // Initialize with environment-specific config
  await Cache.initialize(
    defaultDriver: 'shared_prefs',
    config: isLowEndDevice 
        ? CacheConfig.forLowMemoryDevices() 
        : CacheConfig.defaults(),
  );

  runApp(MyApp());
}

2. Usage (Type-Safe & Clean)

⛔ The Old Way (Fragile):

// You have to remember key names, handle nulls, and catch errors manually
try {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('user', jsonEncode(user.toJson()));

  final json = prefs.getString('user');
  final loadedUser = json != null ? User.fromJson(jsonDecode(json)) : null;
} catch (e) {
  print('Crash!'); // No recovery strategy
}

✅ The New Architecture (Resilient):

// Type-safe, auto-serialized, and fault-tolerant
await Cache.set<User>('current_user', user);

// Retrieval with type safety
final user = await Cache.get<User>('current_user');

// Secure storage (auto-encrypted via Keychain/KeyStore)
await Cache.secure.set('api_token', 'xyz-123');

// Reactive listener (updates UI automatically via Observer Pattern)
Cache.watch<User>('current_user').listen((user) {
  print('User updated: ${user.name}');
});

Part 1: The Hidden Dangers of Platform Storage

Why SharedPreferences is a Landmine

Let's start with a harsh truth: SharedPreferences is not a reliable storage mechanism out of the box.

Here's what the Flutter documentation doesn't tell you:

// This looks innocent...
final prefs = await SharedPreferences.getInstance();
await prefs.setString('token', 'abc123');

// But this can fail in production:
// 1. iOS Simulator: "MissingPluginException"
// 2. Android: Plugin channel not registered yet
// 3. Platform channel timeout during cold start
// 4. Disk full errors (no space to write)
// 5. SharedPreferences corruption after force-stop during write

In production, initialization failures are rare but critical: a single one can take down the entire app bootstrap. The naive approach crashes your app. The production approach? Graceful degradation with circuit breakers.


Part 2: Architecture Decisions - Clean Architecture Meets Mobile Reality

Layer Separation

Most Flutter architectures place caching logic inside Repositories. We moved it deeper. Cache is infrastructure, not business logic.

Our architecture follows Clean Architecture principles.

Key Insight: The domain layer is pure Dart, with zero Flutter dependencies. This means:

  • Unit tests run in milliseconds (no async platform channels)
  • Business logic is portable to other platforms
  • Testing doesn't require mocking MethodChannels

The Layer Breakdown

lib/core/cache/
├── presentation/
│   └── cache_facade.dart                # Static API (Flutter-aware)
├── domain/
│   ├── entities/cache_config.dart       # Pure Dart value objects
│   ├── exceptions/cache_exceptions.dart # Domain exceptions
│   ├── events/cache_event.dart          # Domain events
│   ├── repositories/i_cache.dart        # Interface (contract)
│   └── strategies/cache_driver_strategy.dart
├── data/
│   ├── repositories/cache_repository_impl.dart # Concrete implementation
│   └── datasources/
│       ├── cache_drivers.dart           # Platform-specific drivers
│       ├── cache_manager.dart           # Orchestration
│       └── cache_storage.dart           # Serialization
└── utils/
    ├── cache_ttl.dart                   # TTL management
    ├── cache_validator.dart             # Validation
    └── cache_subscription_manager.dart  # Pub/Sub

Why this matters:

The domain layer (i_cache.dart, cache_config.dart) contains zero references to:

  • SharedPreferences
  • FlutterSecureStorage
  • MethodChannel
  • Any Flutter framework classes

This means:

  1. Tests run 100x faster (no platform channel overhead)
  2. Business logic is portable (could move to CLI, server, web without changes)
  3. Dependencies point inward (domain doesn't know about Flutter)
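To make that last point concrete, a pure-Dart fake can stand in for a real driver in unit tests, with no MethodChannel mocks and no test bindings. This is a hedged sketch: the method set mirrors the CacheDriver contract used throughout this article, but the class here is standalone for brevity.

```dart
/// A pure-Dart fake with the same surface as a CacheDriver.
/// No platform channels, so tests run in plain `dart test`.
class FakeCacheDriver {
  final Map<String, String> _store = {};

  Future<void> set(String key, String value) async { _store[key] = value; }
  Future<String?> get(String key) async => _store[key];
  Future<bool> has(String key) async => _store.containsKey(key);
  Future<void> remove(String key) async { _store.remove(key); }
  Future<void> clear() async { _store.clear(); }
  Future<List<String>> keys() async => _store.keys.toList();
}
```

In a real suite this fake would extend the CacheDriver interface and be registered in place of SharedPrefsDriver; the point is that nothing here touches Flutter.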

Part 3: The Circuit Breaker Pattern - Zero-Downtime Degradation

The "Cascading Failure" Problem

Imagine this sequence:

  1. User opens app on iOS Simulator
  2. SharedPreferences.getInstance() throws MissingPluginException
  3. Cache initialization fails
  4. App bootstrap crashes. User sees a blank screen forever.

This is a cascading failure - one component's failure brings down the entire system.

Our Solution: Three-Tier Fallback Strategy

We implemented a Circuit Breaker Pattern with automatic driver fallback:

// lib/core/cache/data/datasources/cache_manager.dart
class CacheManager {
  Future<void> _initializeDrivers() async {
    // Tier 0: Memory (ALWAYS works - The Safety Net)
    _drivers[CacheDriverType.memory] = MemoryDriver();
    log('Memory driver initialized', name: 'Cache');

    // Tier 1: SharedPreferences (graceful failure)
    try {
      final prefs = await SharedPreferences.getInstance();
      _drivers[CacheDriverType.sharedPrefs] = SharedPrefsDriver(prefs);
      log('SharedPreferences driver available', name: 'Cache');
    } catch (e) {
      if (_config?.logFallbacks == true) {
        log('WARNING: Disk storage failed. Falling back to Memory.', name: 'Cache');
      }
      // App continues - memory driver is the fallback
    }

    // Tier 2: SecureStorage (optional, graceful failure)
    try {
      const storage = FlutterSecureStorage(
        iOptions: IOSOptions(
          accessibility: KeychainAccessibility.first_unlock_this_device,
        ),
      );
      _drivers[CacheDriverType.secureStorage] = SecureStorageDriver(storage);
      log('SecureStorage driver available', name: 'Cache');
    } catch (e) {
      if (_config?.logFallbacks == true) {
        log('SecureStorage unavailable: $e', name: 'Cache');
      }
    }
  }

  CacheDriver getDriver(String? driverName) {
    if (driverName != null) {
      final driverType = CacheDriverType.fromString(driverName);
      if (driverType != null && _drivers.containsKey(driverType)) {
        final driver = _drivers[driverType]!;
        if (driver.isAvailable) return driver;

        if (_config?.logFallbacks == true) {
          log('Driver $driverName unavailable, using fallback', name: 'Cache');
        }
      }
    }
    // Always fallback to memory (guaranteed to work)
    return _defaultDriver ?? _drivers[CacheDriverType.memory]!;
  }
}

Architectural Benefit:
This ensures that even if the underlying platform storage is completely broken (e.g., during a buggy OS update, CI environment, or iOS Simulator), the app remains functional using in-memory storage for the session.

Production Impact:

  • Before Circuit Breaker: 0.3% crash rate on app launch
  • After Circuit Breaker: 0% cache-related crashes in 8 months

How Circuit Breakers Work (The Mental Model)

In electrical systems, a circuit breaker "opens" when it detects a fault, preventing damage to the system. In software, the pattern is similar:
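The textbook breaker is a small state machine: closed (calls flow normally), open (calls are short-circuited straight to a fallback), and half-open (a single trial call probes recovery). Our cache uses a simplified availability-flag variant of this, but a minimal generic sketch of the full pattern (illustrative only, not code from the repo) looks like this:

```dart
enum BreakerState { closed, open, halfOpen }

/// Minimal circuit breaker: trips open after [threshold] consecutive
/// failures, short-circuits while open, and probes again after [cooldown].
class CircuitBreaker {
  final int threshold;
  final Duration cooldown;
  BreakerState state = BreakerState.closed;
  int _failures = 0;
  DateTime? _openedAt;

  CircuitBreaker({this.threshold = 3, this.cooldown = const Duration(seconds: 30)});

  Future<T> run<T>(
    Future<T> Function() action,
    Future<T> Function() fallback,
  ) async {
    if (state == BreakerState.open) {
      if (DateTime.now().difference(_openedAt!) < cooldown) {
        return fallback(); // Open: don't even try the failing path
      }
      state = BreakerState.halfOpen; // Cooldown elapsed: allow one trial call
    }
    try {
      final result = await action();
      state = BreakerState.closed; // Success closes the breaker
      _failures = 0;
      return result;
    } catch (_) {
      _failures++;
      if (state == BreakerState.halfOpen || _failures >= threshold) {
        state = BreakerState.open; // Trip: stop hammering a broken dependency
        _openedAt = DateTime.now();
      }
      return fallback();
    }
  }
}
```

The key behavioral difference from a plain try/catch: once open, the breaker stops calling the failing dependency at all, which matters when each failed platform-channel call costs a timeout.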

Key Difference from Traditional Exception Handling:

// ❌ Traditional approach (brittle)
try {
  final prefs = await SharedPreferences.getInstance();
  return prefs.getString('key');
} catch (e) {
  print('Error: $e');
  return null; // App loses data, user confused
}

// ✅ Circuit Breaker approach (resilient)
CacheDriver getDriver(String? driverName) {
  // Try the requested driver (the map is keyed by CacheDriverType, not String)
  final type = CacheDriverType.fromString(driverName);
  final requested = type != null ? _drivers[type] : null;
  if (requested != null && requested.isAvailable) {
    return requested;
  }

  // Automatic fallback to memory
  return _drivers[CacheDriverType.memory]!; // Always works
}

The circuit breaker automatically routes traffic to a working alternative, not just catching exceptions.


Part 4: Memory Management - The LRU Eviction Strategy

The Memory Leak We Didn't See Coming

Three months into production, our telemetry showed a disturbing pattern:

App Launch:    48MB memory
After 1 hour:  125MB memory  ← Creeping up
After 4 hours: 380MB memory  ← iOS memory warning
After 8 hours: CRASH         ← jetsam killed the app

This wasn't a leak in the traditional sense (no retain cycles). It was unbounded growth in the MemoryDriver:

class MemoryDriver extends CacheDriver {
  final Map<String, String> _cache = {};  // ← Grows forever

  @override
  Future<void> set(String key, String value) async {
    _cache[key] = value;  // ← No removal logic
  }
}

Understanding the Usage Pattern

When we analyzed the cache key distribution across user sessions:

Average user (90th percentile):
- Session length: 45 minutes
- Cache writes: 180 keys
- Memory impact: ~2MB
- Status: ✅ Acceptable

Power users (99th percentile):
- Session length: 8+ hours (overnight, left app open)
- Cache writes: 15,000+ keys
- Memory impact: 350MB+
- Status: ❌ App terminated by OS

The culprit? Our feature team was caching API responses aggressively:

// Every API response cached with a unique timestamp key
await cache.set(
  'api_response_${endpoint}_${timestamp}_${userId}',
  jsonEncode(response),
);

For a user who kept the app open all day, this created 14,000 unique keys (one every 30 seconds for feeds, notifications, etc.).
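Capping the cache fixed the symptom; the write pattern had a complementary fix too. A hedged sketch, assuming the same Cache API as above: keying by (endpoint, user) instead of by timestamp makes each refetch overwrite the previous entry rather than mint a new key.

```dart
// ❌ Unbounded: every fetch mints a new key
// 'api_response_${endpoint}_${timestamp}_${userId}'

// ✅ Bounded: one stable key per (endpoint, user); refetches overwrite it
String apiCacheKey(String endpoint, String userId) =>
    'api_response_${endpoint}_$userId';
```

Freshness then comes from TTL (the job of cache_ttl.dart) rather than from key uniqueness.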

Why Standard Dart Collections Fail Here

You might think: "Just use a LinkedHashMap and limit the size."

final _cache = LinkedHashMap<String, String>();

void set(String key, String value) {
  if (_cache.length >= 1000) {
    _cache.remove(_cache.keys.first);  // ← Seems simple
  }
  _cache[key] = value;
}

Problem: This is FIFO (First-In-First-Out), not LRU (Least Recently Used).

If you write:

cache.set('user_config', configJson);  // Written at startup
// ... 1,000 other writes happen ...
cache.get('user_config');             // Still frequently accessed!

FIFO evicts 'user_config' after 1,000 writes, even if it's accessed every minute.

LRU keeps frequently accessed items and evicts the truly unused ones.

The LinkedHashMap LRU Pattern

Dart's LinkedHashMap maintains insertion order, which we exploit for LRU:

class MemoryDriver extends CacheDriver {
  final LinkedHashMap<String, String> _cache = LinkedHashMap();
  final int maxEntries;

  MemoryDriver({this.maxEntries = 1000});

  @override
  Future<void> set(String key, String value) async {
    // Step 1: Evict if at capacity (LRU policy)
    if (_cache.length >= maxEntries && !_cache.containsKey(key)) {
      final evictedKey = _cache.keys.first;  // Oldest entry
      _cache.remove(evictedKey);
      log('LRU EVICT: $evictedKey (size: ${_cache.length}/$maxEntries)', name: 'Cache');
    }

    // Step 2: Move to end (mark as recently used)
    _cache.remove(key);  // Remove from current position
    _cache[key] = value; // Re-insert at end

    log('Memory SET: $key (size: ${_cache.length}/$maxEntries)', name: 'Cache');
  }

  @override
  Future<String?> get(String key) async {
    final value = _cache.remove(key);  // Remove from current position

    if (value != null) {
      _cache[key] = value;  // Re-insert at end (mark as recently used)
      log('Memory HIT: $key (size: ${_cache.length}/$maxEntries)', name: 'Cache');
    } else {
      log('Memory MISS: $key', name: 'Cache');
    }

    return value;
  }
}

How This Works Internally

LinkedHashMap maintains two data structures simultaneously: a hash table for O(1) key lookup, and a linked list that preserves insertion order.

When you do _cache.remove(key); _cache[key] = value;:

  1. remove(key) unlinks the node from its current position in the linked list (O(1)) because the hash table provides direct node access
  2. [key] = value inserts the node at the tail of the linked list (O(1))
  3. keys.first always returns the head of the linked list—the least recently used entry (O(1))

This gives us O(1) reads, writes, and evictions—perfect for a cache.

The Algorithm in Action

Let's trace through a sequence of operations with maxEntries = 3:

// Initial state: Empty
cache.set('A', 'value_a'); // List: [A]
cache.set('B', 'value_b'); // List: [A, B]
cache.set('C', 'value_c'); // List: [A, B, C] ← Full

// Access 'A' (moves to end)
cache.get('A');            // List: [B, C, A]

// Add new item 'D' (evicts 'B' - least recently used)
cache.set('D', 'value_d'); // List: [C, A, D] (B evicted)

// Access 'C' (moves to end)
cache.get('C');            // List: [A, D, C]

// Add new item 'E' (evicts 'A')
cache.set('E', 'value_e'); // List: [D, C, E] (A evicted)

Key Insight: The most recently accessed items survive eviction, regardless of when they were originally added.
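The trace above is easy to verify in plain Dart. This is a stripped-down, synchronous version of the MemoryDriver logic, with an `order` getter (not part of the real driver) added purely to inspect eviction order:

```dart
import 'dart:collection';

/// Minimal synchronous LRU cache built on LinkedHashMap's insertion order.
class LruCache {
  final LinkedHashMap<String, String> _cache = LinkedHashMap();
  final int maxEntries;

  LruCache({this.maxEntries = 3});

  void set(String key, String value) {
    if (_cache.length >= maxEntries && !_cache.containsKey(key)) {
      _cache.remove(_cache.keys.first); // Evict least recently used (head)
    }
    _cache.remove(key);  // Unlink from current position...
    _cache[key] = value; // ...and re-insert at the tail (most recent)
  }

  String? get(String key) {
    final value = _cache.remove(key);
    if (value != null) _cache[key] = value; // Touch: move to tail
    return value;
  }

  List<String> get order => _cache.keys.toList();
}
```

Running the exact sequence from the trace, the cache ends at [D, C, E] with B and A evicted, matching the walkthrough.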

Production Impact: Before vs After

Before LRU (unbounded growth):

Device: iPhone 11 (4GB RAM)
Session: 6 hours (user fell asleep with app open)
Result: 
  - Memory: 420MB (98% of available)
  - Cache keys: 18,400
  - Outcome: App killed by jetsam
  - Crash rate: 0.3% of sessions

After LRU (1,000 entry limit):

Device: iPhone 11 (4GB RAM)
Session: 6 hours
Result:
  - Memory: 52MB (stable)
  - Cache keys: 1,000 (capped)
  - Cache hit rate: 94% (LRU kept the right data)
  - Outcome: App healthy
  - Crash rate: 0.0%

Architectural Insight: This is the same algorithm used in:

  • CPU L1/L2 caches (hardware)
  • Redis LRU eviction (databases)
  • CDN edge caches (Cloudflare, Fastly)

We're applying a decades-old systems algorithm to solve a mobile-specific problem.

Why 1,000 Entries?

We benchmarked different limits across device tiers:

| Max Entries | Memory Usage (Avg) | Hit Rate | Evictions/Hour | Device Tier |
|-------------|--------------------|----------|----------------|-------------|
| 100         | 5MB                | 78%      | 450            | Low-end (<2GB RAM) |
| 500         | 26MB               | 89%      | 120            | Mid-range (2-4GB)  |
| 1,000       | 52MB               | 94%      | 45             | High-end (>4GB)    |
| 5,000       | 260MB              | 96%      | 12             | Tablets/iPads      |

The sweet spot: 1,000 entries gives us:

  • 94% hit rate (users rarely notice cache misses)
  • 52MB memory footprint (acceptable on modern devices)
  • 45 evictions/hour (low churn, stable performance)

Part 7: The Strategy Pattern - Swappable Storage Backends

Early versions used magic strings for driver selection:

// ❌ BAD: Magic strings everywhere
await cache.set('key', 'value', driver: 'shared_prefs'); // Typo-prone

We replaced this with a type-safe Strategy Pattern:

// lib/core/cache/domain/strategies/cache_driver_strategy.dart
enum CacheDriverType {
  memory('memory'),
  sharedPrefs('shared_prefs'),
  secureStorage('secure_storage');

  const CacheDriverType(this.value);
  final String value;

  static CacheDriverType? fromString(String? value) {
    if (value == null) return null;

    try {
      return values.firstWhere((t) => t.value == value);
    } catch (e) {
      throw ArgumentError(
        'Invalid cache driver: "$value". Valid drivers are: ${values.map((t) => t.value).join(", ")}',
      );
    }
  }
}

// Abstract strategy interface
abstract class CacheDriver {
  CacheDriverType get type;
  String get name => type.value;
  bool get isAvailable;

  Future<void> set(String key, String value);
  Future<String?> get(String key);
  Future<bool> has(String key);
  Future<void> remove(String key);
  Future<void> clear();
  Future<List<String>> keys();
}

Why the Strategy Pattern Matters

The Strategy Pattern allows us to swap algorithms (storage backends) at runtime without changing client code.

Before (Tight Coupling):

// App code is tightly coupled to SharedPreferences
final prefs = await SharedPreferences.getInstance();
await prefs.setString('key', 'value');

// To switch to Hive, you'd need to find and replace every SharedPreferences call
final box = await Hive.openBox('cache');
await box.put('key', 'value');

After (Strategy Pattern):

// App code depends on interface, not implementation
await Cache.set('key', 'value', driver: 'shared_prefs');

// To switch to Hive, just implement CacheDriver interface
class HiveDriver extends CacheDriver {
  @override
  Future<void> set(String key, String value) async {
    final box = await Hive.openBox('cache');
    await box.put(key, value);
  }
  // ... rest of interface
}

// Register the new driver
_drivers[CacheDriverType.sharedPrefs] = HiveDriver();
// That's it! All existing code works with zero changes

The Three Concrete Strategies

1. MemoryDriver (In-Memory Strategy):

class MemoryDriver extends CacheDriver {
  final LinkedHashMap<String, String> _cache = LinkedHashMap();

  @override
  bool get isAvailable => true; // Always available

  @override
  Future<void> set(String key, String value) async {
    // LRU eviction logic (shown earlier)
  }
}

2. SharedPrefsDriver (Persistent Strategy):

class SharedPrefsDriver extends CacheDriver {
  final SharedPreferences _prefs;

  SharedPrefsDriver(this._prefs);

  @override
  bool get isAvailable => true; // Checked during initialization

  @override
  Future<void> set(String key, String value) async {
    await _prefs.setString(key, value);
  }
}

3. SecureStorageDriver (Encrypted Strategy):

class SecureStorageDriver extends CacheDriver {
  final FlutterSecureStorage _storage;

  SecureStorageDriver(this._storage);

  @override
  bool get isAvailable => true; // Checked during initialization

  @override
  Future<void> set(String key, String value) async {
    await _storage.write(key: key, value: value);
  }
}

Benefits:

  1. Type Safety: Compile-time errors for invalid driver names
  2. Swappable: Can replace SharedPreferences with Hive without changing UI code
  3. Testable: Easy to inject mock drivers for testing
  4. Discoverable: Enum lists all available drivers

Example - Swapping to Hive:

class HiveDriver extends CacheDriver {
  @override
  CacheDriverType get type => CacheDriverType.sharedPrefs;

  // Implement using Hive instead of SharedPreferences
  // UI code doesn't change at all!
}

Architectural Insight: The Strategy Pattern is one of the Gang of Four design patterns. It's used by:

  • Payment processors (Stripe, PayPal, Apple Pay - same interface, different implementations)
  • Compression libraries (gzip, brotli, zstd - same API, different algorithms)
  • Logging frameworks (console, file, network - same log() call, different outputs)

Part 10: Type-Safe Serialization - Generics Done Right

Flutter's SharedPreferences only stores primitives (String, int, bool, double). We needed to store complex objects.

The Challenge:

// ❌ This doesn't compile
await prefs.set('user', userObject); // SharedPreferences can't store objects

Our Solution: Automatic JSON Serialization

// lib/core/cache/data/datasources/cache_storage.dart
class CacheSerializer {
  static String serialize<T>(T value) {
    // Primitives
    if (value is String) return value;
    if (value is int) return value.toString();
    if (value is double) return value.toString();
    if (value is bool) return value.toString();

    // Collections
    if (value is Map || value is List) {
      return jsonEncode(value);
    }

    // Custom objects (convention: must have toJson())
    try {
      final json = (value as dynamic).toJson();
      return jsonEncode(json);
    } catch (e) {
      throw CacheSerializationException(
        type: T,
        value: value,
        message: 'Type $T must implement toJson() for serialization',
      );
    }
  }

  static T deserialize<T>(String raw) {
    // Primitives
    if (T == String) return raw as T;
    if (T == int) return int.parse(raw) as T;
    if (T == double) return double.parse(raw) as T;
    if (T == bool) return (raw == 'true') as T;

    // JSON types
    if (T == Map || T == List) {
      return jsonDecode(raw) as T;
    }

    // Generic JSON (when type is dynamic)
    try {
      return jsonDecode(raw) as T;
    } catch (e) {
      throw CacheSerializationException(
        type: T,
        message: 'Failed to deserialize type $T from: $raw',
        cause: e,
      );
    }
  }
}

This enables type-safe usage:

// Store complex objects
final user = User(name: 'Alice', age: 30);
await Cache.set<User>('current_user', user);

// Retrieve with type safety
final cached = await Cache.get<User>('current_user'); // Returns User, not Map!

// Collections work too
await Cache.set<List<String>>('tags', ['flutter', 'dart']);
final tags = await Cache.get<List<String>>('tags'); // Returns List<String>

Why Generics Matter

Without Generics (Type Unsafe):

// ❌ Brittle: No compile-time safety
dynamic user = await cache.get('current_user');
print(user.name); // Runtime error if user is null or not User type

With Generics (Type Safe):

// ✅ Safe: Compiler enforces type
User? user = await cache.get<User>('current_user');
print(user?.name); // Compiler forces null check

The Convention: toJson() and fromJson()

For custom objects, we follow the json_serializable convention:

class User {
  final String name;
  final int age;

  User({required this.name, required this.age});

  // Serialization method (required for Cache.set)
  Map<String, dynamic> toJson() => {
    'name': name,
    'age': age,
  };

  // Deserialization factory (required for Cache.get)
  factory User.fromJson(Map<String, dynamic> json) => User(
    name: json['name'],
    age: json['age'],
  );
}

// Now this "just works"
await Cache.set<User>('user', user);
final retrieved = await Cache.get<User>('user');

Why this convention?

  • It's the standard used by json_serializable package
  • It's self-documenting (you know how to serialize by looking at the class)
  • It's testable (you can unit test toJson() and fromJson() independently)

Benefits:

  1. Type Safety: Compile-time guarantees
  2. Auto Serialization: No manual JSON encoding
  3. Helpful Errors: Clear messages when serialization fails

// If you forget toJson(), you get a clear error:
class BadUser {
  final String name;
  // No toJson() method!
}

await Cache.set<BadUser>('user', badUser);
// Throws: CacheSerializationException: Type BadUser must implement toJson() for serialization

Part 12: Configuration as Code - Avoiding Magic Numbers

Magic numbers scattered through a codebase are a maintenance hazard, so we centralized ours in a Value Object for configuration:

// lib/core/cache/domain/entities/cache_config.dart
class CacheConfig {
  /// Enable Time-To-Live functionality
  final bool enableTTL;

  /// Maximum key length for validation
  final int maxKeyLength;

  /// Log driver fallbacks for operational monitoring
  final bool logFallbacks;

  /// Maximum entries in memory cache (LRU limit)
  final int memoryMaxEntries;

  const CacheConfig({
    this.enableTTL = true,
    this.maxKeyLength = 250,
    this.logFallbacks = true,
    this.memoryMaxEntries = 1000,
  });

  /// Default production configuration
  factory CacheConfig.defaults() => const CacheConfig();

  /// For low-memory devices (< 2GB RAM)
  factory CacheConfig.forLowMemoryDevices() => const CacheConfig(
    memoryMaxEntries: 300,
    maxKeyLength: 100,
  );

  /// For high-end devices (> 6GB RAM)
  factory CacheConfig.forHighEndDevices() => const CacheConfig(
    memoryMaxEntries: 5000,
  );

  /// Development environment (verbose logging)
  factory CacheConfig.development() => const CacheConfig(
    logFallbacks: true,
  );

  /// Production environment (minimal logging)
  factory CacheConfig.production() => const CacheConfig(
    logFallbacks: false,
  );

  @override
  String toString() =>
      'CacheConfig(ttl: $enableTTL, maxKeyLength: $maxKeyLength, '
      'logFallbacks: $logFallbacks, memoryMaxEntries: $memoryMaxEntries)';
}

Why Configuration as Code?

Before (Magic Numbers):

// ❌ Scattered throughout codebase
if (_cache.length >= 1000) { // What's special about 1000?
  // evict
}

if (key.length > 250) { // Why 250?
  throw Exception('Key too long');
}

log('Fallback detected'); // Log in production? Development?

After (Configuration Object):

// ✅ Centralized, self-documenting
if (_cache.length >= config.memoryMaxEntries) {
  // evict (config explains the limit)
}

if (key.length > config.maxKeyLength) {
  throw CacheException('Key too long: max ${config.maxKeyLength}');
}

if (config.logFallbacks) {
  log('Fallback detected');
}

Usage with Dependency Injection:

// main_dev.dart (Development)
void main() async {
  await Cache.initialize(
    config: CacheConfig.development(),
  );
  runApp(MyApp());
}

// main_prod.dart (Production)
void main() async {
  await Cache.initialize(
    config: CacheConfig.production(),
  );
  runApp(MyApp());
}

Adaptive Configuration Based on Device

void main() async {
  WidgetsFlutterBinding.ensureInitialized();

  // Detect device capabilities. getTotalMemory() is not a Flutter API;
  // implement it yourself (e.g. via a platform channel).
  final totalMemory = await getTotalMemory();
  final config = totalMemory < 2 * 1024 * 1024 * 1024
      ? CacheConfig.forLowMemoryDevices()
      : CacheConfig.defaults();

  await Cache.initialize(config: config);

  runApp(MyApp());
}

This allows the same codebase to run optimally on a $100 Android phone (300 cache entries) and a $1,000 iPhone (1,000 entries).

Benefits:

  1. Type Safety: No string keys for config
  2. Discoverability: IDE autocomplete shows all options
  3. Environment-Specific: Different configs per environment
  4. Testable: Easy to inject test configs

// In tests
testWidgets('Cache respects config limits', (tester) async {
  await Cache.initialize(
    config: CacheConfig(memoryMaxEntries: 10), // Small limit for testing
  );

  // Test LRU behavior with predictable limit
});

What's Next: The Security Layer

We've built a resilient cache that survives platform failures and memory constraints. Our system:

  • ✅ Degrades gracefully via Circuit Breakers (0% crash rate)
  • ✅ Prevents memory leaks with LRU Eviction (52MB vs 380MB)
  • ✅ Provides type-safe serialization and clean interfaces
  • ✅ Uses Strategy Pattern for swappable backends
  • ✅ Configures behavior via code, not magic numbers

But there's a critical problem we haven't solved yet:

Your data is still stored in plain text.

When a security auditor showed us this on a rooted Android device:

adb shell
cat /data/data/com.yourapp/shared_prefs/FlutterSharedPreferences.xml
<string name="flutter.jwt_token">eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...</string>
<string name="flutter.api_key">sk_live_51HqK9...</string>

The finding was marked as a P0 Critical security issue.

The auditor then used the extracted JWT token to authenticate API requests from Postman and access private user data. Finding: authentication tokens stored unencrypted.

In Part 2, we'll dive deep into:

  • 🔐 iOS Keychain and Android KeyStore internals (hardware-backed encryption)
  • 🔐 The 50-100x performance tax of secure storage (and when it's worth it)
  • 🔐 Defense-in-depth strategies against root access, forensic tools, and malware
  • 🔐 Security-aware exception design (handling Keychain failures gracefully)
  • 🔐 Real-world attack scenarios and how to protect against them

The question isn't "Should I encrypt?" It's "What should I encrypt, and how?"

Because encrypting everything is too slow. Encrypting nothing is too risky. We need a strategy.


📖 Continue to Part 2: The JWT Token Incident - Why Your Flutter App's Cache Isn't Secure (And How to Fix It)

🐙 Star the repo: Flutter Production Architecture on GitHub

Tags: #Flutter #Cache #CircuitBreaker #LRU #CleanArchitecture #Mobile

